2025-09-20 10:04:37.845378 | Job console starting
2025-09-20 10:04:37.862388 | Updating git repos
2025-09-20 10:04:37.932166 | Cloning repos into workspace
2025-09-20 10:04:38.165287 | Restoring repo states
2025-09-20 10:04:38.188493 | Merging changes
2025-09-20 10:04:38.188515 | Checking out repos
2025-09-20 10:04:38.433428 | Preparing playbooks
2025-09-20 10:04:38.964548 | Running Ansible setup
2025-09-20 10:04:43.156428 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-20 10:04:43.936611 |
2025-09-20 10:04:43.936792 | PLAY [Base pre]
2025-09-20 10:04:43.953666 |
2025-09-20 10:04:43.953842 | TASK [Setup log path fact]
2025-09-20 10:04:43.983585 | orchestrator | ok
2025-09-20 10:04:44.000825 |
2025-09-20 10:04:44.000961 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-20 10:04:44.043571 | orchestrator | ok
2025-09-20 10:04:44.055590 |
2025-09-20 10:04:44.055712 | TASK [emit-job-header : Print job information]
2025-09-20 10:04:44.108846 | # Job Information
2025-09-20 10:04:44.109092 | Ansible Version: 2.16.14
2025-09-20 10:04:44.109143 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-20 10:04:44.109192 | Pipeline: post
2025-09-20 10:04:44.109226 | Executor: 521e9411259a
2025-09-20 10:04:44.109255 | Triggered by: https://github.com/osism/testbed/commit/4bc8c296f9c20dd61593797d794377d17a0b23a1
2025-09-20 10:04:44.109287 | Event ID: 3369d01a-9609-11f0-83ba-8b0329b650fc
2025-09-20 10:04:44.118260 |
2025-09-20 10:04:44.118390 | LOOP [emit-job-header : Print node information]
2025-09-20 10:04:44.250922 | orchestrator | ok:
2025-09-20 10:04:44.251202 | orchestrator | # Node Information
2025-09-20 10:04:44.251250 | orchestrator | Inventory Hostname: orchestrator
2025-09-20 10:04:44.251286 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-20 10:04:44.251316 | orchestrator | Username: zuul-testbed06
2025-09-20 10:04:44.251345 | orchestrator | Distro: Debian 12.12
2025-09-20 10:04:44.251378 | orchestrator | Provider: static-testbed
2025-09-20 10:04:44.251408 | orchestrator | Region:
2025-09-20 10:04:44.251438 | orchestrator | Label: testbed-orchestrator
2025-09-20 10:04:44.251466 | orchestrator | Product Name: OpenStack Nova
2025-09-20 10:04:44.251493 | orchestrator | Interface IP: 81.163.193.140
2025-09-20 10:04:44.269010 |
2025-09-20 10:04:44.269145 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-20 10:04:44.749812 | orchestrator -> localhost | changed
2025-09-20 10:04:44.758216 |
2025-09-20 10:04:44.758347 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-20 10:04:45.823000 | orchestrator -> localhost | changed
2025-09-20 10:04:45.848581 |
2025-09-20 10:04:45.848743 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-20 10:04:46.157501 | orchestrator -> localhost | ok
2025-09-20 10:04:46.170555 |
2025-09-20 10:04:46.170740 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-20 10:04:46.195292 | orchestrator | ok
2025-09-20 10:04:46.215004 | orchestrator | included: /var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-20 10:04:46.223018 |
2025-09-20 10:04:46.223117 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-20 10:04:47.650491 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-20 10:04:47.650792 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/work/efda62675dd1481981f34b8801f8b340_id_rsa
2025-09-20 10:04:47.650889 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/work/efda62675dd1481981f34b8801f8b340_id_rsa.pub
2025-09-20 10:04:47.650921 | orchestrator -> localhost | The key fingerprint is:
2025-09-20 10:04:47.650950 | orchestrator -> localhost | SHA256:NsUWxknz6cH1qKjAnGxwmUuIym1o2/MPn0bBR2XI8jw zuul-build-sshkey
2025-09-20 10:04:47.650973 | orchestrator -> localhost | The key's randomart image is:
2025-09-20 10:04:47.651006 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-20 10:04:47.651028 | orchestrator -> localhost | | .oB+ . |
2025-09-20 10:04:47.651050 | orchestrator -> localhost | | . . + *+= o o |
2025-09-20 10:04:47.651071 | orchestrator -> localhost | | . o * = + = . .|
2025-09-20 10:04:47.651091 | orchestrator -> localhost | |..o B = E o o |
2025-09-20 10:04:47.651112 | orchestrator -> localhost | |.+ o O S o o |
2025-09-20 10:04:47.651138 | orchestrator -> localhost | |. + . + o |
2025-09-20 10:04:47.651159 | orchestrator -> localhost | | . o .. . |
2025-09-20 10:04:47.651191 | orchestrator -> localhost | | o o.. |
2025-09-20 10:04:47.651224 | orchestrator -> localhost | | .o+ |
2025-09-20 10:04:47.651246 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-20 10:04:47.651312 | orchestrator -> localhost | ok: Runtime: 0:00:00.928050
2025-09-20 10:04:47.659506 |
2025-09-20 10:04:47.659638 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-20 10:04:47.679802 | orchestrator | ok
2025-09-20 10:04:47.689604 | orchestrator | included: /var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-20 10:04:47.698965 |
2025-09-20 10:04:47.699062 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-20 10:04:47.722666 | orchestrator | skipping: Conditional result was False
2025-09-20 10:04:47.736609 |
2025-09-20 10:04:47.736740 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-20 10:04:48.309312 | orchestrator | changed
2025-09-20 10:04:48.318970 |
2025-09-20 10:04:48.319100 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-20 10:04:48.587877 | orchestrator | ok
2025-09-20 10:04:48.596110 |
2025-09-20 10:04:48.596234 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-20 10:04:49.009451 | orchestrator | ok
2025-09-20 10:04:49.018462 |
2025-09-20 10:04:49.018594 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-20 10:04:49.422467 | orchestrator | ok
2025-09-20 10:04:49.431992 |
2025-09-20 10:04:49.432123 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-20 10:04:49.456579 | orchestrator | skipping: Conditional result was False
2025-09-20 10:04:49.469304 |
2025-09-20 10:04:49.469466 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-20 10:04:49.921275 | orchestrator -> localhost | changed
2025-09-20 10:04:49.936641 |
2025-09-20 10:04:49.936809 | TASK [add-build-sshkey : Add back temp key]
2025-09-20 10:04:50.274100 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/work/efda62675dd1481981f34b8801f8b340_id_rsa (zuul-build-sshkey)
2025-09-20 10:04:50.274370 | orchestrator -> localhost | ok: Runtime: 0:00:00.017136
2025-09-20 10:04:50.281876 |
2025-09-20 10:04:50.282009 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-20 10:04:50.689519 | orchestrator | ok
2025-09-20 10:04:50.696252 |
2025-09-20 10:04:50.696362 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-20 10:04:50.740586 | orchestrator | skipping: Conditional result was False
2025-09-20 10:04:50.799619 |
2025-09-20 10:04:50.799773 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-20 10:04:51.175321 | orchestrator | ok
2025-09-20 10:04:51.186530 |
2025-09-20 10:04:51.186655 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-20 10:04:51.215834 | orchestrator | ok
2025-09-20 10:04:51.223257 |
2025-09-20 10:04:51.223362 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-20 10:04:51.500495 | orchestrator -> localhost | ok
2025-09-20 10:04:51.518507 |
2025-09-20 10:04:51.518675 | TASK [validate-host : Collect information about the host]
2025-09-20 10:04:52.641724 | orchestrator | ok
2025-09-20 10:04:52.674523 |
2025-09-20 10:04:52.674790 | TASK [validate-host : Sanitize hostname]
2025-09-20 10:04:52.752875 | orchestrator | ok
2025-09-20 10:04:52.761563 |
2025-09-20 10:04:52.761711 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-20 10:04:53.302157 | orchestrator -> localhost | changed
2025-09-20 10:04:53.308929 |
2025-09-20 10:04:53.309042 | TASK [validate-host : Collect information about zuul worker]
2025-09-20 10:04:53.739490 | orchestrator | ok
2025-09-20 10:04:53.748276 |
2025-09-20 10:04:53.748426 | TASK [validate-host : Write out all zuul information for each host]
2025-09-20 10:04:54.300603 | orchestrator -> localhost | changed
2025-09-20 10:04:54.311497 |
2025-09-20 10:04:54.311602 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-20 10:04:54.596025 | orchestrator | ok
2025-09-20 10:04:54.603675 |
2025-09-20 10:04:54.603835 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-20 10:05:36.238046 | orchestrator | changed:
2025-09-20 10:05:36.238331 | orchestrator | .d..t...... src/
2025-09-20 10:05:36.238382 | orchestrator | .d..t...... src/github.com/
2025-09-20 10:05:36.238419 | orchestrator | .d..t...... src/github.com/osism/
2025-09-20 10:05:36.238451 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-20 10:05:36.238481 | orchestrator | RedHat.yml
2025-09-20 10:05:36.253611 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-20 10:05:36.253628 | orchestrator | RedHat.yml
2025-09-20 10:05:36.253681 | orchestrator | = 1.53.0"...
2025-09-20 10:05:47.149238 | orchestrator | 10:05:47.149 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-20 10:05:48.960512 | orchestrator | 10:05:48.960 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-20 10:05:49.469155 | orchestrator | 10:05:49.468 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-20 10:05:49.854984 | orchestrator | 10:05:49.854 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-20 10:05:50.750821 | orchestrator | 10:05:50.750 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-20 10:05:51.135853 | orchestrator | 10:05:51.135 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-20 10:05:51.835562 | orchestrator | 10:05:51.835 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-20 10:05:51.835620 | orchestrator | 10:05:51.835 STDOUT terraform: Providers are signed by their developers.
2025-09-20 10:05:51.835626 | orchestrator | 10:05:51.835 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-20 10:05:51.835648 | orchestrator | 10:05:51.835 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-20 10:05:51.835719 | orchestrator | 10:05:51.835 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-20 10:05:51.835791 | orchestrator | 10:05:51.835 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-20 10:05:51.835853 | orchestrator | 10:05:51.835 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-20 10:05:51.835923 | orchestrator | 10:05:51.835 STDOUT terraform: you run "tofu init" in the future.
2025-09-20 10:05:51.835959 | orchestrator | 10:05:51.835 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-20 10:05:51.836025 | orchestrator | 10:05:51.835 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-20 10:05:51.836092 | orchestrator | 10:05:51.836 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-20 10:05:51.836111 | orchestrator | 10:05:51.836 STDOUT terraform: should now work.
2025-09-20 10:05:51.836176 | orchestrator | 10:05:51.836 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-20 10:05:51.836242 | orchestrator | 10:05:51.836 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-20 10:05:51.836299 | orchestrator | 10:05:51.836 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-20 10:05:51.934132 | orchestrator | 10:05:51.933 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-20 10:05:51.934177 | orchestrator | 10:05:51.933 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-20 10:05:52.152876 | orchestrator | 10:05:52.152 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-20 10:05:52.152946 | orchestrator | 10:05:52.152 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-20 10:05:52.152955 | orchestrator | 10:05:52.152 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-20 10:05:52.152961 | orchestrator | 10:05:52.152 STDOUT terraform: for this configuration.
2025-09-20 10:05:52.281473 | orchestrator | 10:05:52.281 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
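The init output above resolves three providers (hashicorp/null v3.2.4, hashicorp/local v2.5.3, terraform-provider-openstack/openstack v3.3.2) before Terragrunt creates the "ci" workspace. A minimal sketch of a `required_providers` block consistent with those messages is shown below; the ">= 1.53.0" constraint is inferred from the truncated "versions matching" fragment earlier in the log (presumably the openstack provider), and the exact constraints and file layout in the osism/testbed repository may differ.

```hcl
# Sketch only: provider requirements consistent with the init output above.
# Version constraints are inferred from the log, not copied from the testbed repo.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.3.2 in this run
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.3 in this run
    }
    null = {
      source = "hashicorp/null" # resolved to v3.2.4 in this run
    }
  }
}
```

The `Created and switched to workspace "ci"!` message is the output of a `workspace new` invocation (wrapped here by Terragrunt), which is why the plan that follows starts from an empty state and proposes to create every resource.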
2025-09-20 10:05:52.281540 | orchestrator | 10:05:52.281 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-20 10:05:52.422108 | orchestrator | 10:05:52.421 STDOUT terraform: ci.auto.tfvars 2025-09-20 10:05:52.429174 | orchestrator | 10:05:52.429 STDOUT terraform: default_custom.tf 2025-09-20 10:05:52.595997 | orchestrator | 10:05:52.595 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-09-20 10:05:53.491108 | orchestrator | 10:05:53.489 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-20 10:05:54.032697 | orchestrator | 10:05:54.032 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-20 10:05:54.342081 | orchestrator | 10:05:54.337 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-20 10:05:54.342138 | orchestrator | 10:05:54.337 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-20 10:05:54.342145 | orchestrator | 10:05:54.337 STDOUT terraform:  + create 2025-09-20 10:05:54.342151 | orchestrator | 10:05:54.337 STDOUT terraform:  <= read (data resources) 2025-09-20 10:05:54.342156 | orchestrator | 10:05:54.337 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-20 10:05:54.342160 | orchestrator | 10:05:54.337 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-20 10:05:54.342164 | orchestrator | 10:05:54.337 STDOUT terraform:  # (config refers to values not yet known) 2025-09-20 10:05:54.342169 | orchestrator | 10:05:54.337 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-20 10:05:54.342173 | orchestrator | 10:05:54.337 STDOUT terraform:  + checksum = (known after apply) 2025-09-20 10:05:54.342177 | orchestrator | 10:05:54.337 STDOUT terraform:  + created_at = (known after apply) 2025-09-20 10:05:54.342181 | orchestrator | 10:05:54.337 STDOUT terraform:  + file = (known after apply) 2025-09-20 10:05:54.342185 | orchestrator | 10:05:54.337 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342188 | orchestrator | 10:05:54.337 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342205 | orchestrator | 10:05:54.337 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-20 10:05:54.342209 | orchestrator | 10:05:54.337 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-20 10:05:54.342213 | orchestrator | 10:05:54.337 STDOUT terraform:  + most_recent = true 2025-09-20 10:05:54.342217 | orchestrator | 10:05:54.337 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.342221 | orchestrator | 10:05:54.337 STDOUT terraform:  + protected = (known after apply) 2025-09-20 10:05:54.342224 | orchestrator | 10:05:54.337 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342229 | orchestrator | 10:05:54.337 STDOUT terraform:  + schema = (known after apply) 2025-09-20 10:05:54.342232 | orchestrator | 10:05:54.337 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-20 10:05:54.342236 | orchestrator | 10:05:54.337 STDOUT terraform:  + tags = (known after apply) 2025-09-20 10:05:54.342240 | orchestrator | 10:05:54.337 STDOUT terraform:  + updated_at = (known after apply) 2025-09-20 10:05:54.342244 | orchestrator | 
10:05:54.337 STDOUT terraform:  } 2025-09-20 10:05:54.342250 | orchestrator | 10:05:54.337 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-20 10:05:54.342254 | orchestrator | 10:05:54.337 STDOUT terraform:  # (config refers to values not yet known) 2025-09-20 10:05:54.342258 | orchestrator | 10:05:54.337 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-20 10:05:54.342262 | orchestrator | 10:05:54.337 STDOUT terraform:  + checksum = (known after apply) 2025-09-20 10:05:54.342265 | orchestrator | 10:05:54.337 STDOUT terraform:  + created_at = (known after apply) 2025-09-20 10:05:54.342269 | orchestrator | 10:05:54.337 STDOUT terraform:  + file = (known after apply) 2025-09-20 10:05:54.342273 | orchestrator | 10:05:54.337 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342277 | orchestrator | 10:05:54.337 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342280 | orchestrator | 10:05:54.337 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-20 10:05:54.342284 | orchestrator | 10:05:54.337 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-20 10:05:54.342292 | orchestrator | 10:05:54.338 STDOUT terraform:  + most_recent = true 2025-09-20 10:05:54.342296 | orchestrator | 10:05:54.338 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.342300 | orchestrator | 10:05:54.338 STDOUT terraform:  + protected = (known after apply) 2025-09-20 10:05:54.342303 | orchestrator | 10:05:54.338 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342318 | orchestrator | 10:05:54.338 STDOUT terraform:  + schema = (known after apply) 2025-09-20 10:05:54.342322 | orchestrator | 10:05:54.338 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-20 10:05:54.342326 | orchestrator | 10:05:54.338 STDOUT terraform:  + tags = (known after apply) 2025-09-20 10:05:54.342330 | orchestrator | 10:05:54.338 STDOUT terraform:  + updated_at = (known after apply) 2025-09-20 10:05:54.342334 | orchestrator | 10:05:54.338 STDOUT terraform:  } 2025-09-20 10:05:54.342337 | orchestrator | 10:05:54.338 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-20 10:05:54.342345 | orchestrator | 10:05:54.338 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-20 10:05:54.342370 | orchestrator | 10:05:54.338 STDOUT terraform:  + content = (known after apply) 2025-09-20 10:05:54.342374 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-20 10:05:54.342378 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-20 10:05:54.342382 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-20 10:05:54.342386 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-20 10:05:54.342389 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-20 10:05:54.342393 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-20 10:05:54.342397 | orchestrator | 10:05:54.338 STDOUT terraform:  + directory_permission = "0777" 2025-09-20 10:05:54.342401 | orchestrator | 10:05:54.338 STDOUT terraform:  + file_permission = "0644" 2025-09-20 10:05:54.342404 | orchestrator | 10:05:54.338 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-20 10:05:54.342408 | orchestrator | 10:05:54.338 STDOUT 
terraform:  + id = (known after apply) 2025-09-20 10:05:54.342412 | orchestrator | 10:05:54.338 STDOUT terraform:  } 2025-09-20 10:05:54.342416 | orchestrator | 10:05:54.338 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-20 10:05:54.342420 | orchestrator | 10:05:54.338 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-20 10:05:54.342423 | orchestrator | 10:05:54.338 STDOUT terraform:  + content = (known after apply) 2025-09-20 10:05:54.342427 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-20 10:05:54.342431 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-20 10:05:54.342434 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-20 10:05:54.342438 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-20 10:05:54.342442 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-20 10:05:54.342446 | orchestrator | 10:05:54.338 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-20 10:05:54.342449 | orchestrator | 10:05:54.338 STDOUT terraform:  + directory_permission = "0777" 2025-09-20 10:05:54.342454 | orchestrator | 10:05:54.338 STDOUT terraform:  + file_permission = "0644" 2025-09-20 10:05:54.342457 | orchestrator | 10:05:54.338 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-20 10:05:54.342461 | orchestrator | 10:05:54.338 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342465 | orchestrator | 10:05:54.338 STDOUT terraform:  } 2025-09-20 10:05:54.342471 | orchestrator | 10:05:54.339 STDOUT terraform:  # local_file.inventory will be created 2025-09-20 10:05:54.342475 | orchestrator | 10:05:54.339 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-20 10:05:54.342478 | orchestrator | 10:05:54.339 STDOUT terraform:  + content = (known after apply) 2025-09-20 10:05:54.342485 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-20 10:05:54.342489 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-20 10:05:54.342497 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-20 10:05:54.342501 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-20 10:05:54.342505 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-20 10:05:54.342508 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-20 10:05:54.342512 | orchestrator | 10:05:54.339 STDOUT terraform:  + directory_permission = "0777" 2025-09-20 10:05:54.342516 | orchestrator | 10:05:54.339 STDOUT terraform:  + file_permission = "0644" 2025-09-20 10:05:54.342520 | orchestrator | 10:05:54.339 STDOUT terraform:  + filename = "inventory.ci" 2025-09-20 10:05:54.342523 | orchestrator | 10:05:54.339 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342527 | orchestrator | 10:05:54.339 STDOUT terraform:  } 2025-09-20 10:05:54.342531 | orchestrator | 10:05:54.339 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-20 10:05:54.342535 | orchestrator | 10:05:54.339 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-20 10:05:54.342539 | orchestrator | 10:05:54.339 STDOUT terraform:  + content = (sensitive value) 2025-09-20 
10:05:54.342543 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-20 10:05:54.342547 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-20 10:05:54.342550 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-20 10:05:54.342554 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-20 10:05:54.342558 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-20 10:05:54.342562 | orchestrator | 10:05:54.339 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-20 10:05:54.342566 | orchestrator | 10:05:54.339 STDOUT terraform:  + directory_permission = "0700" 2025-09-20 10:05:54.342569 | orchestrator | 10:05:54.339 STDOUT terraform:  + file_permission = "0600" 2025-09-20 10:05:54.342573 | orchestrator | 10:05:54.339 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-20 10:05:54.342577 | orchestrator | 10:05:54.339 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342580 | orchestrator | 10:05:54.339 STDOUT terraform:  } 2025-09-20 10:05:54.342584 | orchestrator | 10:05:54.339 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-20 10:05:54.342588 | orchestrator | 10:05:54.339 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-20 10:05:54.342592 | orchestrator | 10:05:54.339 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342595 | orchestrator | 10:05:54.339 STDOUT terraform:  } 2025-09-20 10:05:54.342599 | orchestrator | 10:05:54.339 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-20 10:05:54.342608 | orchestrator | 10:05:54.339 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-20 10:05:54.342612 | orchestrator | 10:05:54.339 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342616 | orchestrator | 10:05:54.339 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.342620 | orchestrator | 10:05:54.339 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342624 | orchestrator | 10:05:54.339 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342628 | orchestrator | 10:05:54.340 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342631 | orchestrator | 10:05:54.340 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-20 10:05:54.342635 | orchestrator | 10:05:54.340 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342639 | orchestrator | 10:05:54.340 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342645 | orchestrator | 10:05:54.340 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.342649 | orchestrator | 10:05:54.340 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342653 | orchestrator | 10:05:54.340 STDOUT terraform:  } 2025-09-20 10:05:54.342657 | orchestrator | 10:05:54.340 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-20 10:05:54.342661 | orchestrator | 10:05:54.340 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-20 10:05:54.342665 | orchestrator | 10:05:54.340 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342669 | orchestrator | 10:05:54.340 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 
10:05:54.342672 | orchestrator | 10:05:54.340 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342676 | orchestrator | 10:05:54.340 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342680 | orchestrator | 10:05:54.340 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342684 | orchestrator | 10:05:54.340 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-20 10:05:54.342687 | orchestrator | 10:05:54.340 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342691 | orchestrator | 10:05:54.340 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342695 | orchestrator | 10:05:54.340 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.342699 | orchestrator | 10:05:54.340 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342703 | orchestrator | 10:05:54.340 STDOUT terraform:  } 2025-09-20 10:05:54.342706 | orchestrator | 10:05:54.340 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-20 10:05:54.342710 | orchestrator | 10:05:54.340 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-20 10:05:54.342714 | orchestrator | 10:05:54.340 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342720 | orchestrator | 10:05:54.340 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.342724 | orchestrator | 10:05:54.340 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342728 | orchestrator | 10:05:54.340 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342732 | orchestrator | 10:05:54.340 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342735 | orchestrator | 10:05:54.340 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-20 10:05:54.342739 | orchestrator | 10:05:54.340 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342743 | orchestrator | 10:05:54.340 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342746 | orchestrator | 10:05:54.340 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.342750 | orchestrator | 10:05:54.340 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342754 | orchestrator | 10:05:54.340 STDOUT terraform:  } 2025-09-20 10:05:54.342758 | orchestrator | 10:05:54.340 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-20 10:05:54.342762 | orchestrator | 10:05:54.340 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-20 10:05:54.342765 | orchestrator | 10:05:54.341 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342771 | orchestrator | 10:05:54.341 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.342775 | orchestrator | 10:05:54.341 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342778 | orchestrator | 10:05:54.341 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342782 | orchestrator | 10:05:54.341 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342786 | orchestrator | 10:05:54.341 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-20 10:05:54.342792 | orchestrator | 10:05:54.341 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342796 | orchestrator | 10:05:54.341 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342800 | orchestrator | 10:05:54.341 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-20 10:05:54.342803 | orchestrator | 10:05:54.341 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342807 | orchestrator | 10:05:54.341 STDOUT terraform:  } 2025-09-20 10:05:54.342811 | orchestrator | 10:05:54.341 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-20 10:05:54.342815 | orchestrator | 10:05:54.341 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-20 10:05:54.342819 | orchestrator | 10:05:54.341 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342823 | orchestrator | 10:05:54.341 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.342827 | orchestrator | 10:05:54.341 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342830 | orchestrator | 10:05:54.341 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342834 | orchestrator | 10:05:54.341 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342840 | orchestrator | 10:05:54.341 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-20 10:05:54.342844 | orchestrator | 10:05:54.341 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342848 | orchestrator | 10:05:54.341 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342852 | orchestrator | 10:05:54.341 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.342855 | orchestrator | 10:05:54.341 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342859 | orchestrator | 10:05:54.341 STDOUT terraform:  } 2025-09-20 10:05:54.342863 | orchestrator | 10:05:54.341 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-20 10:05:54.342869 | orchestrator | 10:05:54.341 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-20 10:05:54.342873 | orchestrator | 10:05:54.341 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342876 | orchestrator | 10:05:54.341 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.342880 | orchestrator | 10:05:54.341 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342884 | orchestrator | 10:05:54.341 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342887 | orchestrator | 10:05:54.341 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342891 | orchestrator | 10:05:54.341 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-20 10:05:54.342895 | orchestrator | 10:05:54.341 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342899 | orchestrator | 10:05:54.341 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342903 | orchestrator | 10:05:54.341 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.342906 | orchestrator | 10:05:54.341 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342910 | orchestrator | 10:05:54.342 STDOUT terraform:  } 2025-09-20 10:05:54.342914 | orchestrator | 10:05:54.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-20 10:05:54.342918 | orchestrator | 10:05:54.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-20 10:05:54.342921 | orchestrator | 10:05:54.342 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342925 | orchestrator | 10:05:54.342 STDOUT terraform:  + availability_zone = "nova" 
2025-09-20 10:05:54.342929 | orchestrator | 10:05:54.342 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342932 | orchestrator | 10:05:54.342 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.342939 | orchestrator | 10:05:54.342 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342943 | orchestrator | 10:05:54.342 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-20 10:05:54.342947 | orchestrator | 10:05:54.342 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.342950 | orchestrator | 10:05:54.342 STDOUT terraform:  + size = 80 2025-09-20 10:05:54.342957 | orchestrator | 10:05:54.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.342964 | orchestrator | 10:05:54.342 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.342968 | orchestrator | 10:05:54.342 STDOUT terraform:  } 2025-09-20 10:05:54.342972 | orchestrator | 10:05:54.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-20 10:05:54.342976 | orchestrator | 10:05:54.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.342985 | orchestrator | 10:05:54.342 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.342988 | orchestrator | 10:05:54.342 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.342992 | orchestrator | 10:05:54.342 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.342996 | orchestrator | 10:05:54.342 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.342999 | orchestrator | 10:05:54.342 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-20 10:05:54.343003 | orchestrator | 10:05:54.342 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.343007 | orchestrator | 10:05:54.342 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.343011 | orchestrator | 10:05:54.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.343014 | orchestrator | 10:05:54.342 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.343018 | orchestrator | 10:05:54.342 STDOUT terraform:  } 2025-09-20 10:05:54.343022 | orchestrator | 10:05:54.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-20 10:05:54.343027 | orchestrator | 10:05:54.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.343031 | orchestrator | 10:05:54.342 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.343035 | orchestrator | 10:05:54.342 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.343057 | orchestrator | 10:05:54.343 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.343264 | orchestrator | 10:05:54.343 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.343800 | orchestrator | 10:05:54.343 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-20 10:05:54.344267 | orchestrator | 10:05:54.343 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.344503 | orchestrator | 10:05:54.344 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.344773 | orchestrator | 10:05:54.344 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.344932 | orchestrator | 10:05:54.344 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.345082 | orchestrator | 10:05:54.344 STDOUT terraform:  } 2025-09-20 10:05:54.345569 | orchestrator 
| 10:05:54.345 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-20 10:05:54.345905 | orchestrator | 10:05:54.345 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.347422 | orchestrator | 10:05:54.345 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.347464 | orchestrator | 10:05:54.347 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.347469 | orchestrator | 10:05:54.347 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.347489 | orchestrator | 10:05:54.347 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.347524 | orchestrator | 10:05:54.347 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-20 10:05:54.347561 | orchestrator | 10:05:54.347 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.347584 | orchestrator | 10:05:54.347 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.347614 | orchestrator | 10:05:54.347 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.347621 | orchestrator | 10:05:54.347 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.347638 | orchestrator | 10:05:54.347 STDOUT terraform:  } 2025-09-20 10:05:54.347679 | orchestrator | 10:05:54.347 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-20 10:05:54.347720 | orchestrator | 10:05:54.347 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.347754 | orchestrator | 10:05:54.347 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.347777 | orchestrator | 10:05:54.347 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.347812 | orchestrator | 10:05:54.347 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.347843 | orchestrator | 10:05:54.347 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.347881 | orchestrator | 10:05:54.347 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-20 10:05:54.347915 | orchestrator | 10:05:54.347 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.347937 | orchestrator | 10:05:54.347 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.347960 | orchestrator | 10:05:54.347 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.347985 | orchestrator | 10:05:54.347 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.347991 | orchestrator | 10:05:54.347 STDOUT terraform:  } 2025-09-20 10:05:54.348040 | orchestrator | 10:05:54.347 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-20 10:05:54.348080 | orchestrator | 10:05:54.348 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.348115 | orchestrator | 10:05:54.348 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.348139 | orchestrator | 10:05:54.348 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.348173 | orchestrator | 10:05:54.348 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.348208 | orchestrator | 10:05:54.348 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.348245 | orchestrator | 10:05:54.348 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-20 10:05:54.348279 | orchestrator | 10:05:54.348 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.348288 | orchestrator | 10:05:54.348 STDOUT 
terraform:  + size = 20 2025-09-20 10:05:54.348317 | orchestrator | 10:05:54.348 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.348340 | orchestrator | 10:05:54.348 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.348347 | orchestrator | 10:05:54.348 STDOUT terraform:  } 2025-09-20 10:05:54.348402 | orchestrator | 10:05:54.348 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-20 10:05:54.348444 | orchestrator | 10:05:54.348 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.348478 | orchestrator | 10:05:54.348 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.348513 | orchestrator | 10:05:54.348 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.348564 | orchestrator | 10:05:54.348 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.348614 | orchestrator | 10:05:54.348 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.348673 | orchestrator | 10:05:54.348 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-20 10:05:54.348715 | orchestrator | 10:05:54.348 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.348739 | orchestrator | 10:05:54.348 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.348756 | orchestrator | 10:05:54.348 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.348780 | orchestrator | 10:05:54.348 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.348786 | orchestrator | 10:05:54.348 STDOUT terraform:  } 2025-09-20 10:05:54.348834 | orchestrator | 10:05:54.348 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-20 10:05:54.348876 | orchestrator | 10:05:54.348 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.348912 | orchestrator | 10:05:54.348 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.348935 | orchestrator | 10:05:54.348 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.348969 | orchestrator | 10:05:54.348 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.349004 | orchestrator | 10:05:54.348 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.349041 | orchestrator | 10:05:54.348 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-20 10:05:54.349075 | orchestrator | 10:05:54.349 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.349092 | orchestrator | 10:05:54.349 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.349114 | orchestrator | 10:05:54.349 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.349138 | orchestrator | 10:05:54.349 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.349144 | orchestrator | 10:05:54.349 STDOUT terraform:  } 2025-09-20 10:05:54.349192 | orchestrator | 10:05:54.349 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-20 10:05:54.349232 | orchestrator | 10:05:54.349 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.349267 | orchestrator | 10:05:54.349 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.349290 | orchestrator | 10:05:54.349 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.349327 | orchestrator | 10:05:54.349 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.349382 | orchestrator | 
10:05:54.349 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.349420 | orchestrator | 10:05:54.349 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-20 10:05:54.349455 | orchestrator | 10:05:54.349 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.349472 | orchestrator | 10:05:54.349 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.349497 | orchestrator | 10:05:54.349 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.349521 | orchestrator | 10:05:54.349 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.349528 | orchestrator | 10:05:54.349 STDOUT terraform:  } 2025-09-20 10:05:54.349574 | orchestrator | 10:05:54.349 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-20 10:05:54.349615 | orchestrator | 10:05:54.349 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 10:05:54.349649 | orchestrator | 10:05:54.349 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 10:05:54.349673 | orchestrator | 10:05:54.349 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.349710 | orchestrator | 10:05:54.349 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.349746 | orchestrator | 10:05:54.349 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 10:05:54.349783 | orchestrator | 10:05:54.349 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-20 10:05:54.349817 | orchestrator | 10:05:54.349 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.349838 | orchestrator | 10:05:54.349 STDOUT terraform:  + size = 20 2025-09-20 10:05:54.349863 | orchestrator | 10:05:54.349 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 10:05:54.349886 | orchestrator | 10:05:54.349 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 10:05:54.349892 | orchestrator | 10:05:54.349 STDOUT terraform:  } 2025-09-20 10:05:54.349939 | orchestrator | 10:05:54.349 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-20 10:05:54.349979 | orchestrator | 10:05:54.349 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-20 10:05:54.350026 | orchestrator | 10:05:54.349 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.350399 | orchestrator | 10:05:54.350 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.350738 | orchestrator | 10:05:54.350 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.351196 | orchestrator | 10:05:54.350 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.351327 | orchestrator | 10:05:54.351 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.351860 | orchestrator | 10:05:54.351 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.352511 | orchestrator | 10:05:54.351 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.353124 | orchestrator | 10:05:54.352 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.353827 | orchestrator | 10:05:54.353 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-20 10:05:54.354195 | orchestrator | 10:05:54.353 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.356328 | orchestrator | 10:05:54.354 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.356385 | orchestrator | 10:05:54.356 STDOUT terraform:  + id = (known after apply) 2025-09-20 
10:05:54.356390 | orchestrator | 10:05:54.356 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.358098 | orchestrator | 10:05:54.356 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.358142 | orchestrator | 10:05:54.356 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.358147 | orchestrator | 10:05:54.356 STDOUT terraform:  + name = "testbed-manager" 2025-09-20 10:05:54.358152 | orchestrator | 10:05:54.356 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.358156 | orchestrator | 10:05:54.356 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.358160 | orchestrator | 10:05:54.356 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.358164 | orchestrator | 10:05:54.356 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.358167 | orchestrator | 10:05:54.356 STDOUT terraform:  + updated = (known after apply) 2025-09-20 10:05:54.358171 | orchestrator | 10:05:54.356 STDOUT terraform:  + user_data = (sensitive value) 2025-09-20 10:05:54.358175 | orchestrator | 10:05:54.356 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.358179 | orchestrator | 10:05:54.356 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.358183 | orchestrator | 10:05:54.356 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.358186 | orchestrator | 10:05:54.356 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.358190 | orchestrator | 10:05:54.356 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.358197 | orchestrator | 10:05:54.356 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.358201 | orchestrator | 10:05:54.356 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.358205 | orchestrator | 10:05:54.356 STDOUT terraform:  } 2025-09-20 10:05:54.358209 | orchestrator | 10:05:54.356 STDOUT terraform:  + network { 2025-09-20 10:05:54.358213 | orchestrator | 10:05:54.356 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.358216 | orchestrator | 10:05:54.356 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.358220 | orchestrator | 10:05:54.356 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.358224 | orchestrator | 10:05:54.356 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.358238 | orchestrator | 10:05:54.356 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.358242 | orchestrator | 10:05:54.356 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.358245 | orchestrator | 10:05:54.356 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.358249 | orchestrator | 10:05:54.356 STDOUT terraform:  } 2025-09-20 10:05:54.358253 | orchestrator | 10:05:54.356 STDOUT terraform:  } 2025-09-20 10:05:54.358257 | orchestrator | 10:05:54.356 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-20 10:05:54.358261 | orchestrator | 10:05:54.357 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 10:05:54.358265 | orchestrator | 10:05:54.357 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.358272 | orchestrator | 10:05:54.357 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.358276 | orchestrator | 10:05:54.357 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.358280 | orchestrator | 10:05:54.357 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-20 10:05:54.358284 | orchestrator | 10:05:54.357 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.358288 | orchestrator | 10:05:54.357 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.358291 | orchestrator | 10:05:54.357 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.358295 | orchestrator | 10:05:54.357 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.358299 | orchestrator | 10:05:54.357 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 10:05:54.358309 | orchestrator | 10:05:54.357 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.358313 | orchestrator | 10:05:54.357 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.358316 | orchestrator | 10:05:54.357 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.358320 | orchestrator | 10:05:54.357 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.358324 | orchestrator | 10:05:54.357 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.358328 | orchestrator | 10:05:54.357 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.358332 | orchestrator | 10:05:54.357 STDOUT terraform:  + name = "testbed-node-0" 2025-09-20 10:05:54.358336 | orchestrator | 10:05:54.357 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.358340 | orchestrator | 10:05:54.357 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.358343 | orchestrator | 10:05:54.357 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.358347 | orchestrator | 10:05:54.357 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.358361 | orchestrator | 10:05:54.357 STDOUT terraform:  + updated = (known after apply) 2025-09-20 10:05:54.358365 | orchestrator | 10:05:54.357 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 10:05:54.358369 | orchestrator | 10:05:54.357 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.358376 | orchestrator | 10:05:54.357 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.358380 | orchestrator | 10:05:54.357 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.358383 | orchestrator | 10:05:54.357 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.358387 | orchestrator | 10:05:54.357 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.358391 | orchestrator | 10:05:54.357 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.358395 | orchestrator | 10:05:54.357 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.358398 | orchestrator | 10:05:54.357 STDOUT terraform:  } 2025-09-20 10:05:54.358402 | orchestrator | 10:05:54.357 STDOUT terraform:  + network { 2025-09-20 10:05:54.358406 | orchestrator | 10:05:54.357 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.358409 | orchestrator | 10:05:54.357 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.358413 | orchestrator | 10:05:54.357 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.358417 | orchestrator | 10:05:54.357 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.358421 | orchestrator | 10:05:54.357 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.358424 | orchestrator | 10:05:54.357 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.358428 | orchestrator | 10:05:54.357 STDOUT terraform:  + uuid = (known after apply) 
2025-09-20 10:05:54.358432 | orchestrator | 10:05:54.357 STDOUT terraform:  } 2025-09-20 10:05:54.358435 | orchestrator | 10:05:54.358 STDOUT terraform:  } 2025-09-20 10:05:54.358638 | orchestrator | 10:05:54.358 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-20 10:05:54.359531 | orchestrator | 10:05:54.358 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 10:05:54.359803 | orchestrator | 10:05:54.359 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.360059 | orchestrator | 10:05:54.359 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.360165 | orchestrator | 10:05:54.360 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.360523 | orchestrator | 10:05:54.360 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.360641 | orchestrator | 10:05:54.360 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.360966 | orchestrator | 10:05:54.360 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.361317 | orchestrator | 10:05:54.360 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.361669 | orchestrator | 10:05:54.361 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.361974 | orchestrator | 10:05:54.361 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 10:05:54.362414 | orchestrator | 10:05:54.361 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.363030 | orchestrator | 10:05:54.362 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.363080 | orchestrator | 10:05:54.362 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.363085 | orchestrator | 10:05:54.362 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.363090 | orchestrator | 10:05:54.362 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.363094 | orchestrator | 10:05:54.362 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.363097 | orchestrator | 10:05:54.362 STDOUT terraform:  + name = "testbed-node-1" 2025-09-20 10:05:54.363101 | orchestrator | 10:05:54.362 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.363105 | orchestrator | 10:05:54.362 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.363109 | orchestrator | 10:05:54.362 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.363113 | orchestrator | 10:05:54.362 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.363116 | orchestrator | 10:05:54.362 STDOUT terraform:  + updated = (known after apply) 2025-09-20 10:05:54.363120 | orchestrator | 10:05:54.362 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 10:05:54.363125 | orchestrator | 10:05:54.362 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.363128 | orchestrator | 10:05:54.362 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.363145 | orchestrator | 10:05:54.362 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.363150 | orchestrator | 10:05:54.362 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.363159 | orchestrator | 10:05:54.362 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.363163 | orchestrator | 10:05:54.362 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.363167 | orchestrator | 10:05:54.363 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.363171 | 
orchestrator | 10:05:54.363 STDOUT terraform:  } 2025-09-20 10:05:54.363175 | orchestrator | 10:05:54.363 STDOUT terraform:  + network { 2025-09-20 10:05:54.363178 | orchestrator | 10:05:54.363 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.363182 | orchestrator | 10:05:54.363 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.363186 | orchestrator | 10:05:54.363 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.363191 | orchestrator | 10:05:54.363 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.363196 | orchestrator | 10:05:54.363 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.363231 | orchestrator | 10:05:54.363 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.363260 | orchestrator | 10:05:54.363 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.363266 | orchestrator | 10:05:54.363 STDOUT terraform:  } 2025-09-20 10:05:54.363284 | orchestrator | 10:05:54.363 STDOUT terraform:  } 2025-09-20 10:05:54.363325 | orchestrator | 10:05:54.363 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-20 10:05:54.363386 | orchestrator | 10:05:54.363 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 10:05:54.363399 | orchestrator | 10:05:54.363 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.363438 | orchestrator | 10:05:54.363 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.363472 | orchestrator | 10:05:54.363 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.363509 | orchestrator | 10:05:54.363 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.363531 | orchestrator | 10:05:54.363 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.363552 | orchestrator | 10:05:54.363 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.363586 | orchestrator | 10:05:54.363 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.363619 | orchestrator | 10:05:54.363 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.363647 | orchestrator | 10:05:54.363 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 10:05:54.363670 | orchestrator | 10:05:54.363 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.363702 | orchestrator | 10:05:54.363 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.363739 | orchestrator | 10:05:54.363 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.363771 | orchestrator | 10:05:54.363 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.363805 | orchestrator | 10:05:54.363 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.363830 | orchestrator | 10:05:54.363 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.363859 | orchestrator | 10:05:54.363 STDOUT terraform:  + name = "testbed-node-2" 2025-09-20 10:05:54.363892 | orchestrator | 10:05:54.363 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.363917 | orchestrator | 10:05:54.363 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.363950 | orchestrator | 10:05:54.363 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.363973 | orchestrator | 10:05:54.363 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.364007 | orchestrator | 10:05:54.363 STDOUT terraform:  + updated = (known 
after apply) 2025-09-20 10:05:54.364065 | orchestrator | 10:05:54.364 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 10:05:54.364069 | orchestrator | 10:05:54.364 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.364091 | orchestrator | 10:05:54.364 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.364117 | orchestrator | 10:05:54.364 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.364149 | orchestrator | 10:05:54.364 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.364173 | orchestrator | 10:05:54.364 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.364200 | orchestrator | 10:05:54.364 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.364241 | orchestrator | 10:05:54.364 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.364251 | orchestrator | 10:05:54.364 STDOUT terraform:  } 2025-09-20 10:05:54.364257 | orchestrator | 10:05:54.364 STDOUT terraform:  + network { 2025-09-20 10:05:54.364279 | orchestrator | 10:05:54.364 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.364309 | orchestrator | 10:05:54.364 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.364338 | orchestrator | 10:05:54.364 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.364390 | orchestrator | 10:05:54.364 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.364428 | orchestrator | 10:05:54.364 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.364451 | orchestrator | 10:05:54.364 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.364480 | orchestrator | 10:05:54.364 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.364486 | orchestrator | 10:05:54.364 STDOUT terraform:  } 2025-09-20 10:05:54.364506 | orchestrator | 10:05:54.364 STDOUT terraform:  } 2025-09-20 10:05:54.364547 | orchestrator | 10:05:54.364 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-09-20 10:05:54.364597 | orchestrator | 10:05:54.364 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 10:05:54.364622 | orchestrator | 10:05:54.364 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.364654 | orchestrator | 10:05:54.364 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.364689 | orchestrator | 10:05:54.364 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.364723 | orchestrator | 10:05:54.364 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.364745 | orchestrator | 10:05:54.364 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.364769 | orchestrator | 10:05:54.364 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.364800 | orchestrator | 10:05:54.364 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.364834 | orchestrator | 10:05:54.364 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.364862 | orchestrator | 10:05:54.364 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 10:05:54.364884 | orchestrator | 10:05:54.364 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.364917 | orchestrator | 10:05:54.364 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.364952 | orchestrator | 10:05:54.364 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.364987 | orchestrator | 10:05:54.364 STDOUT 
terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.365021 | orchestrator | 10:05:54.364 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.365045 | orchestrator | 10:05:54.365 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.365075 | orchestrator | 10:05:54.365 STDOUT terraform:  + name = "testbed-node-3" 2025-09-20 10:05:54.365102 | orchestrator | 10:05:54.365 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.365135 | orchestrator | 10:05:54.365 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.365168 | orchestrator | 10:05:54.365 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.365191 | orchestrator | 10:05:54.365 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.365226 | orchestrator | 10:05:54.365 STDOUT terraform:  + updated = (known after apply) 2025-09-20 10:05:54.365277 | orchestrator | 10:05:54.365 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 10:05:54.365284 | orchestrator | 10:05:54.365 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.365309 | orchestrator | 10:05:54.365 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.365334 | orchestrator | 10:05:54.365 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.365373 | orchestrator | 10:05:54.365 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.365399 | orchestrator | 10:05:54.365 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.365429 | orchestrator | 10:05:54.365 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.365467 | orchestrator | 10:05:54.365 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.365473 | orchestrator | 10:05:54.365 STDOUT terraform:  } 2025-09-20 10:05:54.365490 | orchestrator | 10:05:54.365 STDOUT terraform:  + network { 2025-09-20 10:05:54.365510 | orchestrator | 10:05:54.365 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.365539 | orchestrator | 10:05:54.365 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.365568 | orchestrator | 10:05:54.365 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.365598 | orchestrator | 10:05:54.365 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.365628 | orchestrator | 10:05:54.365 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.365658 | orchestrator | 10:05:54.365 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.365688 | orchestrator | 10:05:54.365 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.365694 | orchestrator | 10:05:54.365 STDOUT terraform:  } 2025-09-20 10:05:54.365711 | orchestrator | 10:05:54.365 STDOUT terraform:  } 2025-09-20 10:05:54.365803 | orchestrator | 10:05:54.365 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-20 10:05:54.365844 | orchestrator | 10:05:54.365 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 10:05:54.365879 | orchestrator | 10:05:54.365 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.365913 | orchestrator | 10:05:54.365 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.365952 | orchestrator | 10:05:54.365 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.365987 | orchestrator | 10:05:54.365 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.366010 | 
orchestrator | 10:05:54.365 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.369597 | orchestrator | 10:05:54.366 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.369636 | orchestrator | 10:05:54.369 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.369642 | orchestrator | 10:05:54.369 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.369675 | orchestrator | 10:05:54.369 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 10:05:54.369697 | orchestrator | 10:05:54.369 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.369731 | orchestrator | 10:05:54.369 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.369765 | orchestrator | 10:05:54.369 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.369803 | orchestrator | 10:05:54.369 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.369838 | orchestrator | 10:05:54.369 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.369860 | orchestrator | 10:05:54.369 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.369892 | orchestrator | 10:05:54.369 STDOUT terraform:  + name = "testbed-node-4" 2025-09-20 10:05:54.369915 | orchestrator | 10:05:54.369 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.369949 | orchestrator | 10:05:54.369 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.369983 | orchestrator | 10:05:54.369 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.370009 | orchestrator | 10:05:54.369 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.370056 | orchestrator | 10:05:54.370 STDOUT terraform:  + updated = (known after apply) 2025-09-20 10:05:54.370108 | orchestrator | 10:05:54.370 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 10:05:54.370114 | orchestrator | 10:05:54.370 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.370140 | orchestrator | 10:05:54.370 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.370166 | orchestrator | 10:05:54.370 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.370192 | orchestrator | 10:05:54.370 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.370219 | orchestrator | 10:05:54.370 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.370247 | orchestrator | 10:05:54.370 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.370284 | orchestrator | 10:05:54.370 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.370290 | orchestrator | 10:05:54.370 STDOUT terraform:  } 2025-09-20 10:05:54.370306 | orchestrator | 10:05:54.370 STDOUT terraform:  + network { 2025-09-20 10:05:54.370328 | orchestrator | 10:05:54.370 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.370368 | orchestrator | 10:05:54.370 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.370396 | orchestrator | 10:05:54.370 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.370427 | orchestrator | 10:05:54.370 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.370460 | orchestrator | 10:05:54.370 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.370488 | orchestrator | 10:05:54.370 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.370520 | orchestrator | 10:05:54.370 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.370526 | 
orchestrator | 10:05:54.370 STDOUT terraform:  } 2025-09-20 10:05:54.370540 | orchestrator | 10:05:54.370 STDOUT terraform:  } 2025-09-20 10:05:54.370583 | orchestrator | 10:05:54.370 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-20 10:05:54.370622 | orchestrator | 10:05:54.370 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 10:05:54.370655 | orchestrator | 10:05:54.370 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 10:05:54.370689 | orchestrator | 10:05:54.370 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 10:05:54.370722 | orchestrator | 10:05:54.370 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 10:05:54.370759 | orchestrator | 10:05:54.370 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.370781 | orchestrator | 10:05:54.370 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 10:05:54.370802 | orchestrator | 10:05:54.370 STDOUT terraform:  + config_drive = true 2025-09-20 10:05:54.370836 | orchestrator | 10:05:54.370 STDOUT terraform:  + created = (known after apply) 2025-09-20 10:05:54.370869 | orchestrator | 10:05:54.370 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 10:05:54.370898 | orchestrator | 10:05:54.370 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 10:05:54.370947 | orchestrator | 10:05:54.370 STDOUT terraform:  + force_delete = false 2025-09-20 10:05:54.370954 | orchestrator | 10:05:54.370 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 10:05:54.370985 | orchestrator | 10:05:54.370 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.371019 | orchestrator | 10:05:54.370 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 10:05:54.371061 | orchestrator | 10:05:54.371 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 10:05:54.371094 | orchestrator | 10:05:54.371 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 10:05:54.371125 | orchestrator | 10:05:54.371 STDOUT terraform:  + name = "testbed-node-5" 2025-09-20 10:05:54.371157 | orchestrator | 10:05:54.371 STDOUT terraform:  + power_state = "active" 2025-09-20 10:05:54.371213 | orchestrator | 10:05:54.371 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.371255 | orchestrator | 10:05:54.371 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 10:05:54.371283 | orchestrator | 10:05:54.371 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 10:05:54.371321 | orchestrator | 10:05:54.371 STDOUT terraform:  + updated = (known after apply) 2025-09-20 10:05:54.371380 | orchestrator | 10:05:54.371 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 10:05:54.371397 | orchestrator | 10:05:54.371 STDOUT terraform:  + block_device { 2025-09-20 10:05:54.371421 | orchestrator | 10:05:54.371 STDOUT terraform:  + boot_index = 0 2025-09-20 10:05:54.371447 | orchestrator | 10:05:54.371 STDOUT terraform:  + delete_on_termination = false 2025-09-20 10:05:54.371476 | orchestrator | 10:05:54.371 STDOUT terraform:  + destination_type = "volume" 2025-09-20 10:05:54.371503 | orchestrator | 10:05:54.371 STDOUT terraform:  + multiattach = false 2025-09-20 10:05:54.371535 | orchestrator | 10:05:54.371 STDOUT terraform:  + source_type = "volume" 2025-09-20 10:05:54.371569 | orchestrator | 10:05:54.371 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.371584 | orchestrator | 10:05:54.371 
STDOUT terraform:  } 2025-09-20 10:05:54.371590 | orchestrator | 10:05:54.371 STDOUT terraform:  + network { 2025-09-20 10:05:54.371612 | orchestrator | 10:05:54.371 STDOUT terraform:  + access_network = false 2025-09-20 10:05:54.371646 | orchestrator | 10:05:54.371 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 10:05:54.371672 | orchestrator | 10:05:54.371 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 10:05:54.371703 | orchestrator | 10:05:54.371 STDOUT terraform:  + mac = (known after apply) 2025-09-20 10:05:54.371735 | orchestrator | 10:05:54.371 STDOUT terraform:  + name = (known after apply) 2025-09-20 10:05:54.371765 | orchestrator | 10:05:54.371 STDOUT terraform:  + port = (known after apply) 2025-09-20 10:05:54.371792 | orchestrator | 10:05:54.371 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 10:05:54.371807 | orchestrator | 10:05:54.371 STDOUT terraform:  } 2025-09-20 10:05:54.371813 | orchestrator | 10:05:54.371 STDOUT terraform:  } 2025-09-20 10:05:54.371851 | orchestrator | 10:05:54.371 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-20 10:05:54.371883 | orchestrator | 10:05:54.371 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-20 10:05:54.371910 | orchestrator | 10:05:54.371 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-20 10:05:54.371938 | orchestrator | 10:05:54.371 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.371961 | orchestrator | 10:05:54.371 STDOUT terraform:  + name = "testbed" 2025-09-20 10:05:54.371987 | orchestrator | 10:05:54.371 STDOUT terraform:  + private_key = (sensitive value) 2025-09-20 10:05:54.372013 | orchestrator | 10:05:54.371 STDOUT terraform:  + public_key = (known after apply) 2025-09-20 10:05:54.372039 | orchestrator | 10:05:54.372 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.372069 | orchestrator | 10:05:54.372 STDOUT terraform:  + user_id = (known after apply) 2025-09-20 10:05:54.372075 | orchestrator | 10:05:54.372 STDOUT terraform:  } 2025-09-20 10:05:54.372125 | orchestrator | 10:05:54.372 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-20 10:05:54.372174 | orchestrator | 10:05:54.372 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.372202 | orchestrator | 10:05:54.372 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.372229 | orchestrator | 10:05:54.372 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.372258 | orchestrator | 10:05:54.372 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.372284 | orchestrator | 10:05:54.372 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.372311 | orchestrator | 10:05:54.372 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.372317 | orchestrator | 10:05:54.372 STDOUT terraform:  } 2025-09-20 10:05:54.372378 | orchestrator | 10:05:54.372 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-20 10:05:54.372424 | orchestrator | 10:05:54.372 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.372451 | orchestrator | 10:05:54.372 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.372479 | orchestrator | 10:05:54.372 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.372505 | 
orchestrator | 10:05:54.372 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.372532 | orchestrator | 10:05:54.372 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.372559 | orchestrator | 10:05:54.372 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.372575 | orchestrator | 10:05:54.372 STDOUT terraform:  } 2025-09-20 10:05:54.372623 | orchestrator | 10:05:54.372 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-20 10:05:54.372671 | orchestrator | 10:05:54.372 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.372696 | orchestrator | 10:05:54.372 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.372724 | orchestrator | 10:05:54.372 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.372758 | orchestrator | 10:05:54.372 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.372780 | orchestrator | 10:05:54.372 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.372806 | orchestrator | 10:05:54.372 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.372821 | orchestrator | 10:05:54.372 STDOUT terraform:  } 2025-09-20 10:05:54.372869 | orchestrator | 10:05:54.372 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-20 10:05:54.372918 | orchestrator | 10:05:54.372 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.372944 | orchestrator | 10:05:54.372 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.372972 | orchestrator | 10:05:54.372 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.372998 | orchestrator | 10:05:54.372 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.373026 | orchestrator | 10:05:54.372 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.373053 | orchestrator | 10:05:54.373 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.373067 | orchestrator | 10:05:54.373 STDOUT terraform:  } 2025-09-20 10:05:54.373115 | orchestrator | 10:05:54.373 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-20 10:05:54.373166 | orchestrator | 10:05:54.373 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.373192 | orchestrator | 10:05:54.373 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.373219 | orchestrator | 10:05:54.373 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.373247 | orchestrator | 10:05:54.373 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.373275 | orchestrator | 10:05:54.373 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.373301 | orchestrator | 10:05:54.373 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.373308 | orchestrator | 10:05:54.373 STDOUT terraform:  } 2025-09-20 10:05:54.373369 | orchestrator | 10:05:54.373 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-20 10:05:54.373413 | orchestrator | 10:05:54.373 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.373440 | orchestrator | 10:05:54.373 STDOUT terraform:  + device = (known after 
apply) 2025-09-20 10:05:54.373467 | orchestrator | 10:05:54.373 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.373495 | orchestrator | 10:05:54.373 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.373523 | orchestrator | 10:05:54.373 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.373551 | orchestrator | 10:05:54.373 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.373556 | orchestrator | 10:05:54.373 STDOUT terraform:  } 2025-09-20 10:05:54.373606 | orchestrator | 10:05:54.373 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-20 10:05:54.373655 | orchestrator | 10:05:54.373 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.373681 | orchestrator | 10:05:54.373 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.373709 | orchestrator | 10:05:54.373 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.373736 | orchestrator | 10:05:54.373 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.373764 | orchestrator | 10:05:54.373 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.373790 | orchestrator | 10:05:54.373 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.373796 | orchestrator | 10:05:54.373 STDOUT terraform:  } 2025-09-20 10:05:54.373850 | orchestrator | 10:05:54.373 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-20 10:05:54.373895 | orchestrator | 10:05:54.373 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.373922 | orchestrator | 10:05:54.373 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.373950 | orchestrator | 10:05:54.373 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.373978 | orchestrator | 10:05:54.373 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.374006 | orchestrator | 10:05:54.373 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.377022 | orchestrator | 10:05:54.374 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.377051 | orchestrator | 10:05:54.376 STDOUT terraform:  } 2025-09-20 10:05:54.377056 | orchestrator | 10:05:54.376 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-20 10:05:54.377060 | orchestrator | 10:05:54.376 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 10:05:54.377071 | orchestrator | 10:05:54.376 STDOUT terraform:  + device = (known after apply) 2025-09-20 10:05:54.377075 | orchestrator | 10:05:54.376 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.377079 | orchestrator | 10:05:54.376 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 10:05:54.377083 | orchestrator | 10:05:54.376 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.377086 | orchestrator | 10:05:54.376 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 10:05:54.377090 | orchestrator | 10:05:54.376 STDOUT terraform:  } 2025-09-20 10:05:54.377096 | orchestrator | 10:05:54.376 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-20 10:05:54.377102 | orchestrator | 10:05:54.376 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-20 10:05:54.377106 | orchestrator | 10:05:54.376 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-20 10:05:54.377110 | orchestrator | 10:05:54.376 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-20 10:05:54.377113 | orchestrator | 10:05:54.376 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.377117 | orchestrator | 10:05:54.376 STDOUT terraform:  + port_id = (known after apply) 2025-09-20 10:05:54.377121 | orchestrator | 10:05:54.376 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.377124 | orchestrator | 10:05:54.376 STDOUT terraform:  } 2025-09-20 10:05:54.377133 | orchestrator | 10:05:54.376 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-20 10:05:54.377137 | orchestrator | 10:05:54.377 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-20 10:05:54.377141 | orchestrator | 10:05:54.377 STDOUT terraform:  + address = (known after apply) 2025-09-20 10:05:54.377145 | orchestrator | 10:05:54.377 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.377148 | orchestrator | 10:05:54.377 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-20 10:05:54.377154 | orchestrator | 10:05:54.377 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.377177 | orchestrator | 10:05:54.377 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-20 10:05:54.377205 | orchestrator | 10:05:54.377 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.377211 | orchestrator | 10:05:54.377 STDOUT terraform:  + pool = "public" 2025-09-20 10:05:54.377256 | orchestrator | 10:05:54.377 STDOUT terraform:  + port_id = (known after apply) 2025-09-20 10:05:54.377262 | orchestrator | 10:05:54.377 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.377280 | orchestrator | 10:05:54.377 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.377294 | orchestrator | 10:05:54.377 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.377318 | orchestrator | 10:05:54.377 STDOUT terraform:  } 2025-09-20 10:05:54.377372 | orchestrator | 10:05:54.377 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-20 10:05:54.377437 | orchestrator | 10:05:54.377 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-20 10:05:54.377445 | orchestrator | 10:05:54.377 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.377533 | orchestrator | 10:05:54.377 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.377540 | orchestrator | 10:05:54.377 STDOUT terraform:  + availability_zone_hints = [ 2025-09-20 10:05:54.377544 | orchestrator | 10:05:54.377 STDOUT terraform:  + "nova", 2025-09-20 10:05:54.377547 | orchestrator | 10:05:54.377 STDOUT terraform:  ] 2025-09-20 10:05:54.377553 | orchestrator | 10:05:54.377 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-20 10:05:54.377585 | orchestrator | 10:05:54.377 STDOUT terraform:  + external = (known after apply) 2025-09-20 10:05:54.377620 | orchestrator | 10:05:54.377 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.377656 | orchestrator | 10:05:54.377 STDOUT terraform:  + mtu = (known after apply) 2025-09-20 10:05:54.377693 | orchestrator | 10:05:54.377 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-20 10:05:54.377727 | orchestrator | 10:05:54.377 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.377772 | orchestrator | 10:05:54.377 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.377807 | orchestrator | 10:05:54.377 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.377847 | orchestrator | 10:05:54.377 STDOUT terraform:  + shared = (known after apply) 2025-09-20 10:05:54.377879 | orchestrator | 10:05:54.377 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.377913 | orchestrator | 10:05:54.377 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-20 10:05:54.377938 | orchestrator | 10:05:54.377 STDOUT terraform:  + segments (known after apply) 2025-09-20 10:05:54.377944 | orchestrator | 10:05:54.377 STDOUT terraform:  } 2025-09-20 10:05:54.377992 | orchestrator | 10:05:54.377 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-20 10:05:54.378057 | orchestrator | 10:05:54.377 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-20 10:05:54.378087 | orchestrator | 10:05:54.378 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.378122 | orchestrator | 10:05:54.378 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.378155 | orchestrator | 10:05:54.378 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.378190 | orchestrator | 10:05:54.378 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.378225 | orchestrator | 10:05:54.378 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.378262 | orchestrator | 10:05:54.378 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.378297 | orchestrator | 10:05:54.378 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.378332 | orchestrator | 10:05:54.378 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.378377 | orchestrator | 10:05:54.378 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.378411 | orchestrator | 10:05:54.378 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.378446 | orchestrator | 10:05:54.378 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.378479 | orchestrator | 10:05:54.378 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.378515 | orchestrator | 10:05:54.378 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.378551 | orchestrator | 10:05:54.378 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.378586 | orchestrator | 10:05:54.378 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.378622 | orchestrator | 10:05:54.378 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.378641 | orchestrator | 10:05:54.378 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.378668 | orchestrator | 10:05:54.378 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.378675 | orchestrator | 10:05:54.378 STDOUT terraform:  } 2025-09-20 10:05:54.378701 | orchestrator | 10:05:54.378 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.378728 | orchestrator | 10:05:54.378 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.378735 | orchestrator | 10:05:54.378 STDOUT 
terraform:  } 2025-09-20 10:05:54.378759 | orchestrator | 10:05:54.378 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.378766 | orchestrator | 10:05:54.378 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.378792 | orchestrator | 10:05:54.378 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-20 10:05:54.378822 | orchestrator | 10:05:54.378 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.378828 | orchestrator | 10:05:54.378 STDOUT terraform:  } 2025-09-20 10:05:54.382132 | orchestrator | 10:05:54.378 STDOUT terraform:  } 2025-09-20 10:05:54.382158 | orchestrator | 10:05:54.378 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-20 10:05:54.382163 | orchestrator | 10:05:54.378 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 10:05:54.382167 | orchestrator | 10:05:54.378 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.382172 | orchestrator | 10:05:54.378 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.382176 | orchestrator | 10:05:54.378 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.382179 | orchestrator | 10:05:54.379 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.382183 | orchestrator | 10:05:54.379 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.382198 | orchestrator | 10:05:54.379 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.382202 | orchestrator | 10:05:54.379 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.382216 | orchestrator | 10:05:54.379 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.382220 | orchestrator | 10:05:54.379 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.382230 | orchestrator | 10:05:54.379 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.382234 | orchestrator | 10:05:54.379 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.382238 | orchestrator | 10:05:54.379 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.382241 | orchestrator | 10:05:54.379 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.382245 | orchestrator | 10:05:54.379 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.382249 | orchestrator | 10:05:54.379 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.382253 | orchestrator | 10:05:54.379 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.382256 | orchestrator | 10:05:54.379 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382260 | orchestrator | 10:05:54.379 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.382264 | orchestrator | 10:05:54.379 STDOUT terraform:  } 2025-09-20 10:05:54.382268 | orchestrator | 10:05:54.379 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382272 | orchestrator | 10:05:54.379 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 10:05:54.382275 | orchestrator | 10:05:54.379 STDOUT terraform:  } 2025-09-20 10:05:54.382279 | orchestrator | 10:05:54.379 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382283 | orchestrator | 10:05:54.379 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.382287 | orchestrator | 10:05:54.379 STDOUT terraform:  } 
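The keypair, the nine volume attachments, and the manager floating IP planned above map roughly onto the following sketch. The plan suggests the "testbed" keypair is generated server-side (public_key is "(known after apply)" and private_key is returned as a sensitive value); how the nine attachments are distributed across the six nodes, and which volumes they refer to, is not visible in this excerpt, so the index arithmetic and the extra_volume resource below are assumptions.

resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
  # no public_key supplied: Nova generates the pair and returns private_key as a sensitive value
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9                                                               # attachments [0]..[8] in this plan
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id   # assumption: the node/volume mapping is not shown here
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id   # assumed volume resource, not part of this excerpt
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"                                                               # allocated from the "public" pool per the plan
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id         # assumed target; the plan shows only (known after apply)
}

Only one floating IP appears in the plan, so external access goes through the manager; the other nodes stay on the internal management network.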
2025-09-20 10:05:54.382291 | orchestrator | 10:05:54.379 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382295 | orchestrator | 10:05:54.379 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 10:05:54.382298 | orchestrator | 10:05:54.379 STDOUT terraform:  } 2025-09-20 10:05:54.382302 | orchestrator | 10:05:54.379 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.382306 | orchestrator | 10:05:54.379 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.382310 | orchestrator | 10:05:54.379 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-09-20 10:05:54.382314 | orchestrator | 10:05:54.379 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.382317 | orchestrator | 10:05:54.379 STDOUT terraform:  } 2025-09-20 10:05:54.382321 | orchestrator | 10:05:54.379 STDOUT terraform:  } 2025-09-20 10:05:54.382325 | orchestrator | 10:05:54.379 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-09-20 10:05:54.382333 | orchestrator | 10:05:54.379 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 10:05:54.382340 | orchestrator | 10:05:54.379 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.382344 | orchestrator | 10:05:54.379 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.382369 | orchestrator | 10:05:54.379 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.382377 | orchestrator | 10:05:54.379 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.382381 | orchestrator | 10:05:54.379 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.382384 | orchestrator | 10:05:54.379 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.382388 | orchestrator | 10:05:54.379 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.382392 | orchestrator | 10:05:54.379 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.382395 | orchestrator | 10:05:54.380 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.382399 | orchestrator | 10:05:54.380 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.382403 | orchestrator | 10:05:54.380 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.382407 | orchestrator | 10:05:54.380 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.382410 | orchestrator | 10:05:54.380 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.382414 | orchestrator | 10:05:54.380 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.382418 | orchestrator | 10:05:54.380 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.382422 | orchestrator | 10:05:54.380 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.382425 | orchestrator | 10:05:54.380 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382429 | orchestrator | 10:05:54.380 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.382433 | orchestrator | 10:05:54.380 STDOUT terraform:  } 2025-09-20 10:05:54.382437 | orchestrator | 10:05:54.380 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382441 | orchestrator | 10:05:54.380 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 10:05:54.382445 | orchestrator | 10:05:54.380 STDOUT terraform:  } 2025-09-20 
10:05:54.382448 | orchestrator | 10:05:54.380 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382452 | orchestrator | 10:05:54.380 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.382456 | orchestrator | 10:05:54.380 STDOUT terraform:  } 2025-09-20 10:05:54.382460 | orchestrator | 10:05:54.380 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382463 | orchestrator | 10:05:54.380 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 10:05:54.382467 | orchestrator | 10:05:54.380 STDOUT terraform:  } 2025-09-20 10:05:54.382471 | orchestrator | 10:05:54.380 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.382475 | orchestrator | 10:05:54.380 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.382482 | orchestrator | 10:05:54.380 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-09-20 10:05:54.382486 | orchestrator | 10:05:54.380 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.382490 | orchestrator | 10:05:54.380 STDOUT terraform:  } 2025-09-20 10:05:54.382493 | orchestrator | 10:05:54.380 STDOUT terraform:  } 2025-09-20 10:05:54.382497 | orchestrator | 10:05:54.380 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-20 10:05:54.382501 | orchestrator | 10:05:54.380 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 10:05:54.382505 | orchestrator | 10:05:54.380 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.382508 | orchestrator | 10:05:54.380 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.382515 | orchestrator | 10:05:54.380 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.382519 | orchestrator | 10:05:54.380 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.382523 | orchestrator | 10:05:54.380 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.382527 | orchestrator | 10:05:54.380 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.382530 | orchestrator | 10:05:54.380 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.382534 | orchestrator | 10:05:54.380 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.382538 | orchestrator | 10:05:54.380 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.382542 | orchestrator | 10:05:54.380 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.382545 | orchestrator | 10:05:54.380 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.382549 | orchestrator | 10:05:54.380 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.382553 | orchestrator | 10:05:54.380 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.382556 | orchestrator | 10:05:54.381 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.382560 | orchestrator | 10:05:54.381 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.382564 | orchestrator | 10:05:54.381 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.382568 | orchestrator | 10:05:54.381 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382572 | orchestrator | 10:05:54.381 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.382575 | orchestrator | 10:05:54.381 STDOUT terraform:  } 2025-09-20 10:05:54.382579 | 
orchestrator | 10:05:54.381 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382583 | orchestrator | 10:05:54.381 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 10:05:54.382587 | orchestrator | 10:05:54.381 STDOUT terraform:  } 2025-09-20 10:05:54.382590 | orchestrator | 10:05:54.381 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382594 | orchestrator | 10:05:54.381 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.382604 | orchestrator | 10:05:54.381 STDOUT terraform:  } 2025-09-20 10:05:54.382608 | orchestrator | 10:05:54.381 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.382612 | orchestrator | 10:05:54.381 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 10:05:54.382616 | orchestrator | 10:05:54.381 STDOUT terraform:  } 2025-09-20 10:05:54.382619 | orchestrator | 10:05:54.381 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.382623 | orchestrator | 10:05:54.381 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.382627 | orchestrator | 10:05:54.381 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-20 10:05:54.382631 | orchestrator | 10:05:54.381 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.382635 | orchestrator | 10:05:54.381 STDOUT terraform:  } 2025-09-20 10:05:54.382638 | orchestrator | 10:05:54.381 STDOUT terraform:  } 2025-09-20 10:05:54.382642 | orchestrator | 10:05:54.381 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-20 10:05:54.382646 | orchestrator | 10:05:54.381 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 10:05:54.382650 | orchestrator | 10:05:54.381 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.382654 | orchestrator | 10:05:54.381 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.382658 | orchestrator | 10:05:54.381 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.382661 | orchestrator | 10:05:54.381 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.382667 | orchestrator | 10:05:54.381 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.382671 | orchestrator | 10:05:54.381 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.382677 | orchestrator | 10:05:54.381 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.382681 | orchestrator | 10:05:54.381 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.382687 | orchestrator | 10:05:54.381 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.382691 | orchestrator | 10:05:54.381 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.382695 | orchestrator | 10:05:54.381 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.382699 | orchestrator | 10:05:54.381 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.382702 | orchestrator | 10:05:54.381 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.382706 | orchestrator | 10:05:54.381 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.382710 | orchestrator | 10:05:54.381 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.382922 | orchestrator | 10:05:54.381 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.383536 | orchestrator | 
10:05:54.382 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.383563 | orchestrator | 10:05:54.383 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.383704 | orchestrator | 10:05:54.383 STDOUT terraform:  } 2025-09-20 10:05:54.385069 | orchestrator | 10:05:54.383 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.385542 | orchestrator | 10:05:54.385 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 10:05:54.385816 | orchestrator | 10:05:54.385 STDOUT terraform:  } 2025-09-20 10:05:54.385983 | orchestrator | 10:05:54.385 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.386419 | orchestrator | 10:05:54.385 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.386441 | orchestrator | 10:05:54.386 STDOUT terraform:  } 2025-09-20 10:05:54.386645 | orchestrator | 10:05:54.386 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.387219 | orchestrator | 10:05:54.386 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 10:05:54.387240 | orchestrator | 10:05:54.387 STDOUT terraform:  } 2025-09-20 10:05:54.387807 | orchestrator | 10:05:54.387 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.387815 | orchestrator | 10:05:54.387 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.387842 | orchestrator | 10:05:54.387 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-20 10:05:54.387882 | orchestrator | 10:05:54.387 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.387888 | orchestrator | 10:05:54.387 STDOUT terraform:  } 2025-09-20 10:05:54.387905 | orchestrator | 10:05:54.387 STDOUT terraform:  } 2025-09-20 10:05:54.387953 | orchestrator | 10:05:54.387 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-20 10:05:54.387998 | orchestrator | 10:05:54.387 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 10:05:54.388032 | orchestrator | 10:05:54.387 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.388067 | orchestrator | 10:05:54.388 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.388101 | orchestrator | 10:05:54.388 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.388136 | orchestrator | 10:05:54.388 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.388176 | orchestrator | 10:05:54.388 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.388224 | orchestrator | 10:05:54.388 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.388260 | orchestrator | 10:05:54.388 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.388294 | orchestrator | 10:05:54.388 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.388331 | orchestrator | 10:05:54.388 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.388384 | orchestrator | 10:05:54.388 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.388419 | orchestrator | 10:05:54.388 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.388460 | orchestrator | 10:05:54.388 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 10:05:54.388491 | orchestrator | 10:05:54.388 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.388528 | orchestrator | 10:05:54.388 STDOUT terraform:  + region = (known after apply) 
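The management network and its ports (one for the manager at 192.168.16.5, one per node at 192.168.16.10 through 192.168.16.15, each carrying the same set of allowed address pairs) can be summarised as the sketch below. The subnet resource is assumed: the fixed IPs and the 192.168.16.0/20 prefix used elsewhere in the plan imply it, but it is not visible in this excerpt.

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" {    # assumed; not shown in this excerpt
  network_id = openstack_networking_network_v2.net_management.id
  cidr       = "192.168.16.0/20"
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    ip_address = "192.168.16.5"
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
  }

  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
}

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    ip_address = "192.168.16.${10 + count.index}"                   # 192.168.16.10 .. 192.168.16.15
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
  }

  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }
}

The allowed_address_pairs entries relax port security so these ports may also send traffic sourced from the listed addresses and prefixes, typically VIPs or routed ranges.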
2025-09-20 10:05:54.388564 | orchestrator | 10:05:54.388 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.388599 | orchestrator | 10:05:54.388 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.388618 | orchestrator | 10:05:54.388 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.388645 | orchestrator | 10:05:54.388 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.388652 | orchestrator | 10:05:54.388 STDOUT terraform:  } 2025-09-20 10:05:54.388675 | orchestrator | 10:05:54.388 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.388704 | orchestrator | 10:05:54.388 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 10:05:54.388710 | orchestrator | 10:05:54.388 STDOUT terraform:  } 2025-09-20 10:05:54.388732 | orchestrator | 10:05:54.388 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.388759 | orchestrator | 10:05:54.388 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.388765 | orchestrator | 10:05:54.388 STDOUT terraform:  } 2025-09-20 10:05:54.388787 | orchestrator | 10:05:54.388 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.388815 | orchestrator | 10:05:54.388 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 10:05:54.388821 | orchestrator | 10:05:54.388 STDOUT terraform:  } 2025-09-20 10:05:54.388850 | orchestrator | 10:05:54.388 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.388867 | orchestrator | 10:05:54.388 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.388891 | orchestrator | 10:05:54.388 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-20 10:05:54.388920 | orchestrator | 10:05:54.388 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.388927 | orchestrator | 10:05:54.388 STDOUT terraform:  } 2025-09-20 10:05:54.388942 | orchestrator | 10:05:54.388 STDOUT terraform:  } 2025-09-20 10:05:54.388987 | orchestrator | 10:05:54.388 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-20 10:05:54.389030 | orchestrator | 10:05:54.388 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 10:05:54.389065 | orchestrator | 10:05:54.389 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.389100 | orchestrator | 10:05:54.389 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 10:05:54.389134 | orchestrator | 10:05:54.389 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 10:05:54.389169 | orchestrator | 10:05:54.389 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.389205 | orchestrator | 10:05:54.389 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 10:05:54.389240 | orchestrator | 10:05:54.389 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 10:05:54.389275 | orchestrator | 10:05:54.389 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 10:05:54.389310 | orchestrator | 10:05:54.389 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 10:05:54.389346 | orchestrator | 10:05:54.389 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.389389 | orchestrator | 10:05:54.389 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 10:05:54.389423 | orchestrator | 10:05:54.389 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.389457 | orchestrator | 10:05:54.389 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-09-20 10:05:54.389491 | orchestrator | 10:05:54.389 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 10:05:54.389526 | orchestrator | 10:05:54.389 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.389561 | orchestrator | 10:05:54.389 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 10:05:54.389596 | orchestrator | 10:05:54.389 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.389624 | orchestrator | 10:05:54.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.389651 | orchestrator | 10:05:54.389 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 10:05:54.389658 | orchestrator | 10:05:54.389 STDOUT terraform:  } 2025-09-20 10:05:54.389679 | orchestrator | 10:05:54.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.389707 | orchestrator | 10:05:54.389 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 10:05:54.389714 | orchestrator | 10:05:54.389 STDOUT terraform:  } 2025-09-20 10:05:54.389735 | orchestrator | 10:05:54.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.389762 | orchestrator | 10:05:54.389 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 10:05:54.389769 | orchestrator | 10:05:54.389 STDOUT terraform:  } 2025-09-20 10:05:54.389791 | orchestrator | 10:05:54.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 10:05:54.389818 | orchestrator | 10:05:54.389 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 10:05:54.389824 | orchestrator | 10:05:54.389 STDOUT terraform:  } 2025-09-20 10:05:54.389849 | orchestrator | 10:05:54.389 STDOUT terraform:  + binding (known after apply) 2025-09-20 10:05:54.389855 | orchestrator | 10:05:54.389 STDOUT terraform:  + fixed_ip { 2025-09-20 10:05:54.389883 | orchestrator | 10:05:54.389 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-20 10:05:54.389912 | orchestrator | 10:05:54.389 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.389918 | orchestrator | 10:05:54.389 STDOUT terraform:  } 2025-09-20 10:05:54.389933 | orchestrator | 10:05:54.389 STDOUT terraform:  } 2025-09-20 10:05:54.389981 | orchestrator | 10:05:54.389 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-20 10:05:54.390042 | orchestrator | 10:05:54.389 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-20 10:05:54.390060 | orchestrator | 10:05:54.390 STDOUT terraform:  + force_destroy = false 2025-09-20 10:05:54.390089 | orchestrator | 10:05:54.390 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.390117 | orchestrator | 10:05:54.390 STDOUT terraform:  + port_id = (known after apply) 2025-09-20 10:05:54.390144 | orchestrator | 10:05:54.390 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.390173 | orchestrator | 10:05:54.390 STDOUT terraform:  + router_id = (known after apply) 2025-09-20 10:05:54.390200 | orchestrator | 10:05:54.390 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 10:05:54.390206 | orchestrator | 10:05:54.390 STDOUT terraform:  } 2025-09-20 10:05:54.390243 | orchestrator | 10:05:54.390 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-20 10:05:54.390277 | orchestrator | 10:05:54.390 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-20 10:05:54.390313 | orchestrator | 
10:05:54.390 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 10:05:54.390359 | orchestrator | 10:05:54.390 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.390381 | orchestrator | 10:05:54.390 STDOUT terraform:  + availability_zone_hints = [ 2025-09-20 10:05:54.390389 | orchestrator | 10:05:54.390 STDOUT terraform:  + "nova", 2025-09-20 10:05:54.390403 | orchestrator | 10:05:54.390 STDOUT terraform:  ] 2025-09-20 10:05:54.390439 | orchestrator | 10:05:54.390 STDOUT terraform:  + distributed = (known after apply) 2025-09-20 10:05:54.390474 | orchestrator | 10:05:54.390 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-20 10:05:54.390524 | orchestrator | 10:05:54.390 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-20 10:05:54.390565 | orchestrator | 10:05:54.390 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-20 10:05:54.390595 | orchestrator | 10:05:54.390 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.390622 | orchestrator | 10:05:54.390 STDOUT terraform:  + name = "testbed" 2025-09-20 10:05:54.390662 | orchestrator | 10:05:54.390 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.390696 | orchestrator | 10:05:54.390 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.390724 | orchestrator | 10:05:54.390 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-20 10:05:54.390730 | orchestrator | 10:05:54.390 STDOUT terraform:  } 2025-09-20 10:05:54.390784 | orchestrator | 10:05:54.390 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-20 10:05:54.390836 | orchestrator | 10:05:54.390 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-20 10:05:54.390860 | orchestrator | 10:05:54.390 STDOUT terraform:  + description = "ssh" 2025-09-20 10:05:54.390889 | orchestrator | 10:05:54.390 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.390913 | orchestrator | 10:05:54.390 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.390949 | orchestrator | 10:05:54.390 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.390973 | orchestrator | 10:05:54.390 STDOUT terraform:  + port_range_max = 22 2025-09-20 10:05:54.390996 | orchestrator | 10:05:54.390 STDOUT terraform:  + port_range_min = 22 2025-09-20 10:05:54.391020 | orchestrator | 10:05:54.390 STDOUT terraform:  + protocol = "tcp" 2025-09-20 10:05:54.391055 | orchestrator | 10:05:54.391 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.391089 | orchestrator | 10:05:54.391 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.391124 | orchestrator | 10:05:54.391 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.391152 | orchestrator | 10:05:54.391 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.391187 | orchestrator | 10:05:54.391 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.391223 | orchestrator | 10:05:54.391 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.391230 | orchestrator | 10:05:54.391 STDOUT terraform:  } 2025-09-20 10:05:54.391283 | orchestrator | 10:05:54.391 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-20 10:05:54.391336 | orchestrator | 10:05:54.391 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-20 10:05:54.391390 | orchestrator | 10:05:54.391 STDOUT terraform:  + description = "wireguard" 2025-09-20 10:05:54.391433 | orchestrator | 10:05:54.391 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.391468 | orchestrator | 10:05:54.391 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.391506 | orchestrator | 10:05:54.391 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.391532 | orchestrator | 10:05:54.391 STDOUT terraform:  + port_range_max = 51820 2025-09-20 10:05:54.391556 | orchestrator | 10:05:54.391 STDOUT terraform:  + port_range_min = 51820 2025-09-20 10:05:54.391580 | orchestrator | 10:05:54.391 STDOUT terraform:  + protocol = "udp" 2025-09-20 10:05:54.391615 | orchestrator | 10:05:54.391 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.391649 | orchestrator | 10:05:54.391 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.391684 | orchestrator | 10:05:54.391 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.391713 | orchestrator | 10:05:54.391 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.391751 | orchestrator | 10:05:54.391 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.391784 | orchestrator | 10:05:54.391 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.391790 | orchestrator | 10:05:54.391 STDOUT terraform:  } 2025-09-20 10:05:54.391844 | orchestrator | 10:05:54.391 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-20 10:05:54.391896 | orchestrator | 10:05:54.391 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-20 10:05:54.391924 | orchestrator | 10:05:54.391 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.391949 | orchestrator | 10:05:54.391 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.391984 | orchestrator | 10:05:54.391 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.392009 | orchestrator | 10:05:54.391 STDOUT terraform:  + protocol = "tcp" 2025-09-20 10:05:54.392045 | orchestrator | 10:05:54.392 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.392079 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.392114 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.392149 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-20 10:05:54.392185 | orchestrator | 10:05:54.392 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.392223 | orchestrator | 10:05:54.392 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.392229 | orchestrator | 10:05:54.392 STDOUT terraform:  } 2025-09-20 10:05:54.392282 | orchestrator | 10:05:54.392 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-20 10:05:54.392333 | orchestrator | 10:05:54.392 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-20 10:05:54.392374 | orchestrator | 10:05:54.392 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.392397 | orchestrator | 10:05:54.392 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-20 10:05:54.392441 | orchestrator | 10:05:54.392 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.392477 | orchestrator | 10:05:54.392 STDOUT terraform:  + protocol = "udp" 2025-09-20 10:05:54.392535 | orchestrator | 10:05:54.392 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.392587 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.392636 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.392670 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-20 10:05:54.392706 | orchestrator | 10:05:54.392 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.392742 | orchestrator | 10:05:54.392 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.392748 | orchestrator | 10:05:54.392 STDOUT terraform:  } 2025-09-20 10:05:54.392805 | orchestrator | 10:05:54.392 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-20 10:05:54.392857 | orchestrator | 10:05:54.392 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-20 10:05:54.392886 | orchestrator | 10:05:54.392 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.392911 | orchestrator | 10:05:54.392 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.392968 | orchestrator | 10:05:54.392 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.392973 | orchestrator | 10:05:54.392 STDOUT terraform:  + protocol = "icmp" 2025-09-20 10:05:54.401676 | orchestrator | 10:05:54.392 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.402181 | orchestrator | 10:05:54.392 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.402322 | orchestrator | 10:05:54.393 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.402470 | orchestrator | 10:05:54.393 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.402529 | orchestrator | 10:05:54.393 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.402586 | orchestrator | 10:05:54.393 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.402648 | orchestrator | 10:05:54.393 STDOUT terraform:  } 2025-09-20 10:05:54.402703 | orchestrator | 10:05:54.393 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-20 10:05:54.402773 | orchestrator | 10:05:54.393 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-20 10:05:54.402889 | orchestrator | 10:05:54.393 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.402915 | orchestrator | 10:05:54.393 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.402979 | orchestrator | 10:05:54.393 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.403014 | orchestrator | 10:05:54.393 STDOUT terraform:  + protocol = "tcp" 2025-09-20 10:05:54.403071 | orchestrator | 10:05:54.393 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.403115 | orchestrator | 10:05:54.393 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.403137 | orchestrator | 10:05:54.393 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 
10:05:54.403198 | orchestrator | 10:05:54.393 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.403226 | orchestrator | 10:05:54.393 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.403296 | orchestrator | 10:05:54.393 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.403334 | orchestrator | 10:05:54.393 STDOUT terraform:  } 2025-09-20 10:05:54.403483 | orchestrator | 10:05:54.393 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-20 10:05:54.404048 | orchestrator | 10:05:54.393 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-20 10:05:54.404344 | orchestrator | 10:05:54.404 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.404770 | orchestrator | 10:05:54.404 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.405313 | orchestrator | 10:05:54.404 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.405643 | orchestrator | 10:05:54.405 STDOUT terraform:  + protocol = "udp" 2025-09-20 10:05:54.405864 | orchestrator | 10:05:54.405 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.406114 | orchestrator | 10:05:54.405 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.406389 | orchestrator | 10:05:54.406 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.406751 | orchestrator | 10:05:54.406 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.407386 | orchestrator | 10:05:54.406 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.407914 | orchestrator | 10:05:54.407 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.410315 | orchestrator | 10:05:54.407 STDOUT terraform:  } 2025-09-20 10:05:54.410339 | orchestrator | 10:05:54.407 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-20 10:05:54.410344 | orchestrator | 10:05:54.408 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-20 10:05:54.410371 | orchestrator | 10:05:54.408 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.410376 | orchestrator | 10:05:54.408 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.410380 | orchestrator | 10:05:54.408 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.410384 | orchestrator | 10:05:54.408 STDOUT terraform:  + protocol = "icmp" 2025-09-20 10:05:54.410388 | orchestrator | 10:05:54.408 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.410392 | orchestrator | 10:05:54.408 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.410396 | orchestrator | 10:05:54.408 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.410400 | orchestrator | 10:05:54.408 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.410404 | orchestrator | 10:05:54.408 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.410407 | orchestrator | 10:05:54.408 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.410411 | orchestrator | 10:05:54.408 STDOUT terraform:  } 2025-09-20 10:05:54.410415 | orchestrator | 10:05:54.408 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-20 10:05:54.410419 | orchestrator | 
10:05:54.408 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-20 10:05:54.410423 | orchestrator | 10:05:54.408 STDOUT terraform:  + description = "vrrp" 2025-09-20 10:05:54.410427 | orchestrator | 10:05:54.408 STDOUT terraform:  + direction = "ingress" 2025-09-20 10:05:54.410431 | orchestrator | 10:05:54.408 STDOUT terraform:  + ethertype = "IPv4" 2025-09-20 10:05:54.410454 | orchestrator | 10:05:54.408 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.410459 | orchestrator | 10:05:54.408 STDOUT terraform:  + protocol = "112" 2025-09-20 10:05:54.410463 | orchestrator | 10:05:54.408 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.410466 | orchestrator | 10:05:54.408 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-20 10:05:54.410470 | orchestrator | 10:05:54.408 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-20 10:05:54.410474 | orchestrator | 10:05:54.408 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-20 10:05:54.410485 | orchestrator | 10:05:54.408 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-20 10:05:54.410489 | orchestrator | 10:05:54.408 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.410492 | orchestrator | 10:05:54.408 STDOUT terraform:  } 2025-09-20 10:05:54.410496 | orchestrator | 10:05:54.408 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-20 10:05:54.410500 | orchestrator | 10:05:54.408 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-20 10:05:54.410504 | orchestrator | 10:05:54.408 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.410508 | orchestrator | 10:05:54.408 STDOUT terraform:  + description = "management security group" 2025-09-20 10:05:54.410512 | orchestrator | 10:05:54.408 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.410515 | orchestrator | 10:05:54.409 STDOUT terraform:  + name = "testbed-management" 2025-09-20 10:05:54.410519 | orchestrator | 10:05:54.409 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.410639 | orchestrator | 10:05:54.409 STDOUT terraform:  + stateful = (known after apply) 2025-09-20 10:05:54.410698 | orchestrator | 10:05:54.409 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.410769 | orchestrator | 10:05:54.409 STDOUT terraform:  } 2025-09-20 10:05:54.410846 | orchestrator | 10:05:54.409 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-20 10:05:54.410884 | orchestrator | 10:05:54.409 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-20 10:05:54.411097 | orchestrator | 10:05:54.409 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.411274 | orchestrator | 10:05:54.409 STDOUT terraform:  + description = "node security group" 2025-09-20 10:05:54.411336 | orchestrator | 10:05:54.409 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.411461 | orchestrator | 10:05:54.409 STDOUT terraform:  + name = "testbed-node" 2025-09-20 10:05:54.411484 | orchestrator | 10:05:54.409 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.411560 | orchestrator | 10:05:54.409 STDOUT terraform:  + stateful = (known after apply) 2025-09-20 10:05:54.411709 | orchestrator | 10:05:54.409 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-20 10:05:54.411822 | orchestrator | 10:05:54.409 STDOUT terraform:  } 2025-09-20 10:05:54.411889 | orchestrator | 10:05:54.409 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-20 10:05:54.411958 | orchestrator | 10:05:54.409 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-20 10:05:54.412071 | orchestrator | 10:05:54.409 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 10:05:54.412144 | orchestrator | 10:05:54.409 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-20 10:05:54.412258 | orchestrator | 10:05:54.409 STDOUT terraform:  + dns_nameservers = [ 2025-09-20 10:05:54.412359 | orchestrator | 10:05:54.409 STDOUT terraform:  + "8.8.8.8", 2025-09-20 10:05:54.412421 | orchestrator | 10:05:54.409 STDOUT terraform:  + "9.9.9.9", 2025-09-20 10:05:54.412539 | orchestrator | 10:05:54.409 STDOUT terraform:  ] 2025-09-20 10:05:54.412636 | orchestrator | 10:05:54.409 STDOUT terraform:  + enable_dhcp = true 2025-09-20 10:05:54.412691 | orchestrator | 10:05:54.409 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-20 10:05:54.412779 | orchestrator | 10:05:54.409 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.412836 | orchestrator | 10:05:54.409 STDOUT terraform:  + ip_version = 4 2025-09-20 10:05:54.412914 | orchestrator | 10:05:54.409 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-20 10:05:54.412960 | orchestrator | 10:05:54.409 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-20 10:05:54.413085 | orchestrator | 10:05:54.409 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-20 10:05:54.413105 | orchestrator | 10:05:54.409 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 10:05:54.413306 | orchestrator | 10:05:54.409 STDOUT terraform:  + no_gateway = false 2025-09-20 10:05:54.413377 | orchestrator | 10:05:54.409 STDOUT terraform:  + region = (known after apply) 2025-09-20 10:05:54.413455 | orchestrator | 10:05:54.409 STDOUT terraform:  + service_types = (known after apply) 2025-09-20 10:05:54.413550 | orchestrator | 10:05:54.409 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 10:05:54.413627 | orchestrator | 10:05:54.409 STDOUT terraform:  + allocation_pool { 2025-09-20 10:05:54.413775 | orchestrator | 10:05:54.409 STDOUT terraform:  + end = "192.168.31.250" 2025-09-20 10:05:54.413890 | orchestrator | 10:05:54.409 STDOUT terraform:  + start = "192.168.31.200" 2025-09-20 10:05:54.413958 | orchestrator | 10:05:54.409 STDOUT terraform:  } 2025-09-20 10:05:54.414124 | orchestrator | 10:05:54.409 STDOUT terraform:  } 2025-09-20 10:05:54.414190 | orchestrator | 10:05:54.409 STDOUT terraform:  # terraform_data.image will be created 2025-09-20 10:05:54.414454 | orchestrator | 10:05:54.409 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-20 10:05:54.414466 | orchestrator | 10:05:54.409 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.414470 | orchestrator | 10:05:54.409 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-20 10:05:54.414474 | orchestrator | 10:05:54.410 STDOUT terraform:  + output = (known after apply) 2025-09-20 10:05:54.414478 | orchestrator | 10:05:54.410 STDOUT terraform:  } 2025-09-20 10:05:54.414482 | orchestrator | 10:05:54.410 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-20 10:05:54.414490 | orchestrator | 10:05:54.410 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-20 
10:05:54.414494 | orchestrator | 10:05:54.410 STDOUT terraform:  + id = (known after apply) 2025-09-20 10:05:54.414498 | orchestrator | 10:05:54.410 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-20 10:05:54.414501 | orchestrator | 10:05:54.410 STDOUT terraform:  + output = (known after apply) 2025-09-20 10:05:54.414505 | orchestrator | 10:05:54.410 STDOUT terraform:  } 2025-09-20 10:05:54.414509 | orchestrator | 10:05:54.410 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-20 10:05:54.414513 | orchestrator | 10:05:54.410 STDOUT terraform: Changes to Outputs: 2025-09-20 10:05:54.414523 | orchestrator | 10:05:54.410 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-20 10:05:54.414527 | orchestrator | 10:05:54.410 STDOUT terraform:  + private_key = (sensitive value) 2025-09-20 10:05:54.447878 | orchestrator | 10:05:54.446 STDOUT terraform: terraform_data.image: Creating... 2025-09-20 10:05:54.447914 | orchestrator | 10:05:54.446 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=a22b0dcd-155e-816d-9ea5-446e25781581] 2025-09-20 10:05:54.550614 | orchestrator | 10:05:54.550 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-20 10:05:54.551255 | orchestrator | 10:05:54.551 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=a99c50b6-fefc-3f71-df2c-b71e3e259772] 2025-09-20 10:05:54.568666 | orchestrator | 10:05:54.568 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-20 10:05:54.578514 | orchestrator | 10:05:54.578 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-20 10:05:54.582375 | orchestrator | 10:05:54.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-20 10:05:54.594075 | orchestrator | 10:05:54.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-20 10:05:54.595037 | orchestrator | 10:05:54.593 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-20 10:05:54.595136 | orchestrator | 10:05:54.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-20 10:05:54.595422 | orchestrator | 10:05:54.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-20 10:05:54.595511 | orchestrator | 10:05:54.593 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-20 10:05:54.597335 | orchestrator | 10:05:54.597 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-20 10:05:54.609005 | orchestrator | 10:05:54.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-20 10:05:55.033160 | orchestrator | 10:05:55.032 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-20 10:05:55.056448 | orchestrator | 10:05:55.055 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-20 10:05:55.084533 | orchestrator | 10:05:55.084 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-20 10:05:55.090693 | orchestrator | 10:05:55.090 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
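For reference, a minimal HCL sketch of the image-pinning pattern that the plan and apply output above imply: a terraform_data resource carries the image name ("Ubuntu 24.04") and an openstack_images_image_v2 data source resolves it to an image ID. Only the resource/data source names and the "Ubuntu 24.04" input value come from the log; the variable name and the data source arguments (name, most_recent) are assumptions.

```hcl
# Sketch only: "var.image" and the data source arguments are assumptions.
variable "image" {
  type    = string
  default = "Ubuntu 24.04" # matches the "input" value shown in the plan
}

# Appears in the log as terraform_data.image / terraform_data.image_node.
resource "terraform_data" "image" {
  input = var.image
}

# Appears in the log as data.openstack_images_image_v2.image (read completes in ~0s).
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```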
2025-09-20 10:05:55.190275 | orchestrator | 10:05:55.189 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-09-20 10:05:55.198208 | orchestrator | 10:05:55.197 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-20 10:05:55.604580 | orchestrator | 10:05:55.604 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=8e793961-64eb-4067-8af4-1c68ea11215d] 2025-09-20 10:05:55.614626 | orchestrator | 10:05:55.614 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-20 10:05:58.238131 | orchestrator | 10:05:58.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=31ba085f-693b-4453-b385-26f20a05fd2b] 2025-09-20 10:05:58.242231 | orchestrator | 10:05:58.242 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-20 10:05:58.259087 | orchestrator | 10:05:58.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=497e6100-ba4e-4e70-85f7-b35af0c206cf] 2025-09-20 10:05:58.271654 | orchestrator | 10:05:58.271 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-09-20 10:05:58.280762 | orchestrator | 10:05:58.276 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=0b1518a6c3f6c00320e0c91709172d5d0541183f] 2025-09-20 10:05:58.287009 | orchestrator | 10:05:58.286 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=e293ec10-02fe-4251-bcfc-ccec4462aa3b] 2025-09-20 10:05:58.289149 | orchestrator | 10:05:58.289 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-20 10:05:58.297030 | orchestrator | 10:05:58.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-20 10:05:58.298188 | orchestrator | 10:05:58.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=696d6a7f-e2ae-4e31-b4d8-740f0d8ea949] 2025-09-20 10:05:58.304840 | orchestrator | 10:05:58.303 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=7249c7d6-d18e-42b1-809d-80705e221d22] 2025-09-20 10:05:58.304873 | orchestrator | 10:05:58.304 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-20 10:05:58.309316 | orchestrator | 10:05:58.309 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-20 10:05:58.312699 | orchestrator | 10:05:58.312 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=81367dded35f8aa4489c38cc04d3cb0360e5dc01] 2025-09-20 10:05:58.318974 | orchestrator | 10:05:58.318 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=21304f64-4c3c-4785-baa1-44b6b0fccd58] 2025-09-20 10:05:58.323899 | orchestrator | 10:05:58.323 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-20 10:05:58.327076 | orchestrator | 10:05:58.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
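The keypair and the two local key files created above could be expressed roughly as follows. The resource names match the apply output; the file paths and the choice to let Nova generate the key pair (no public_key argument) are assumptions.

```hcl
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed" # matches the id reported in the apply output
}

# Private key written with restrictive permissions; the paths are assumptions.
resource "local_sensitive_file" "id_rsa" {
  content         = openstack_compute_keypair_v2.key.private_key
  filename        = "${path.module}/id_rsa.testbed"
  file_permission = "0600"
}

resource "local_file" "id_rsa_pub" {
  content  = openstack_compute_keypair_v2.key.public_key
  filename = "${path.module}/id_rsa.testbed.pub"
}
```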
2025-09-20 10:05:58.372916 | orchestrator | 10:05:58.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=c8bcd070-709d-401e-b3ff-1d1dc46d20a8] 2025-09-20 10:05:58.388586 | orchestrator | 10:05:58.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=28f1987a-6b2b-4def-9528-f2d7153ba652] 2025-09-20 10:05:58.388634 | orchestrator | 10:05:58.387 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-20 10:05:58.662818 | orchestrator | 10:05:58.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=31f92631-138d-4bd6-ad62-32e6ca0c065f] 2025-09-20 10:05:59.026761 | orchestrator | 10:05:59.026 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=fbd74b3e-c733-4d29-a234-ed34b95a0672] 2025-09-20 10:05:59.475626 | orchestrator | 10:05:59.475 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=eac5e281-bef9-47d5-840e-62184aca8574] 2025-09-20 10:05:59.483137 | orchestrator | 10:05:59.482 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-20 10:06:01.710126 | orchestrator | 10:06:01.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=3a06e929-54a6-429b-9235-a8b1ff4ea0a5] 2025-09-20 10:06:01.746462 | orchestrator | 10:06:01.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=e9a3bff6-b113-4fee-8d66-62177b4eee9d] 2025-09-20 10:06:01.758147 | orchestrator | 10:06:01.757 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=8b575f8d-db95-47cd-bd18-28166b169c8c] 2025-09-20 10:06:01.764779 | orchestrator | 10:06:01.764 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d] 2025-09-20 10:06:01.793202 | orchestrator | 10:06:01.792 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=43e39c93-643c-4c1a-98e9-cd9d81c0dd99] 2025-09-20 10:06:01.816601 | orchestrator | 10:06:01.816 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=dae07622-8d8b-4700-ac99-c09b16db109d] 2025-09-20 10:06:02.662731 | orchestrator | 10:06:02.662 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=9ef9ec9c-ef02-4ed3-b87e-cdb2b1640e5e] 2025-09-20 10:06:02.668483 | orchestrator | 10:06:02.668 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-20 10:06:02.672732 | orchestrator | 10:06:02.672 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-20 10:06:02.674317 | orchestrator | 10:06:02.674 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
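The management network plumbing that just finished creating corresponds to roughly the following HCL. All attribute values are taken from the plan output further up in this log; the net_management network resource is assumed to be defined elsewhere in the same configuration.

```hcl
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```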
2025-09-20 10:06:02.896882 | orchestrator | 10:06:02.896 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=6b0a4712-311d-4a06-8b57-09fd4e33c055] 2025-09-20 10:06:02.914094 | orchestrator | 10:06:02.903 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b665a707-4e28-49e1-8fa2-b42b691a7f68] 2025-09-20 10:06:02.914161 | orchestrator | 10:06:02.912 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-20 10:06:02.917466 | orchestrator | 10:06:02.917 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-20 10:06:02.917515 | orchestrator | 10:06:02.917 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-20 10:06:02.917524 | orchestrator | 10:06:02.917 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-20 10:06:02.919668 | orchestrator | 10:06:02.919 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-20 10:06:02.921518 | orchestrator | 10:06:02.921 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-20 10:06:02.922193 | orchestrator | 10:06:02.922 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-20 10:06:02.926613 | orchestrator | 10:06:02.926 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-20 10:06:02.930316 | orchestrator | 10:06:02.930 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-20 10:06:03.072286 | orchestrator | 10:06:03.071 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d6eba103-f27e-4e75-b78b-221fff113e8c] 2025-09-20 10:06:03.079511 | orchestrator | 10:06:03.079 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-20 10:06:03.115051 | orchestrator | 10:06:03.114 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=6482ee1d-5c0d-4d12-bae6-12b613e2fb12] 2025-09-20 10:06:03.131722 | orchestrator | 10:06:03.131 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-20 10:06:03.271113 | orchestrator | 10:06:03.270 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=24234c15-c250-499a-8959-9918a6c2186c] 2025-09-20 10:06:03.288096 | orchestrator | 10:06:03.287 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-20 10:06:03.352194 | orchestrator | 10:06:03.352 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=b281eebb-3d11-4a5b-b4b1-71d870a466cc] 2025-09-20 10:06:03.367408 | orchestrator | 10:06:03.367 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
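The security groups and rules being created here all follow the same pattern. As an example, the management group and its SSH rule (description "ssh", TCP 22 from 0.0.0.0/0 in the plan output) would look roughly like this:

```hcl
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The wireguard (UDP 51820), intra-subnet, ICMP and VRRP (protocol 112) rules shown in the plan differ only in protocol, port range and remote prefix.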
2025-09-20 10:06:03.533583 | orchestrator | 10:06:03.533 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=b969bd44-c0c2-4111-b457-4d427a93b994] 2025-09-20 10:06:03.545953 | orchestrator | 10:06:03.545 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-20 10:06:03.746677 | orchestrator | 10:06:03.746 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=3f41e4a7-834d-4dcc-b1e8-840728e88237] 2025-09-20 10:06:03.759858 | orchestrator | 10:06:03.759 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-20 10:06:03.825720 | orchestrator | 10:06:03.825 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9b3e8a65-afdd-4550-bced-6b3a70839671] 2025-09-20 10:06:03.830102 | orchestrator | 10:06:03.829 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-20 10:06:03.893782 | orchestrator | 10:06:03.893 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=2cd4e572-6b9e-4168-a38b-01ac5ef51e9f] 2025-09-20 10:06:03.994106 | orchestrator | 10:06:03.993 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=e27956fb-0d16-41d5-9164-f1621697432e] 2025-09-20 10:06:04.085847 | orchestrator | 10:06:04.085 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=87c0ffa8-4a3c-48a9-ae9f-688833b11a00] 2025-09-20 10:06:04.399311 | orchestrator | 10:06:04.398 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=a95b3186-84be-4b3e-af09-8682aab69bb9] 2025-09-20 10:06:04.486921 | orchestrator | 10:06:04.486 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=c28cce0b-b065-49bd-acf5-b82d9da8cfd7] 2025-09-20 10:06:04.813080 | orchestrator | 10:06:04.812 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8093513d-7161-48ce-8eef-64b7f9062c1a] 2025-09-20 10:06:04.865291 | orchestrator | 10:06:04.864 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=ee3e8c74-9400-4ae8-a1fa-2039bcf5672c] 2025-09-20 10:06:04.982877 | orchestrator | 10:06:04.982 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=5225757f-e16e-422c-b731-a9d2af464405] 2025-09-20 10:06:05.185563 | orchestrator | 10:06:05.185 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=352c89a8-92e2-45e7-af61-09402d59a70b] 2025-09-20 10:06:07.684978 | orchestrator | 10:06:07.684 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=68c6433f-b816-46b0-a523-2091c576751d] 2025-09-20 10:06:07.708020 | orchestrator | 10:06:07.707 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-20 10:06:07.715085 | orchestrator | 10:06:07.714 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-20 10:06:07.715625 | orchestrator | 10:06:07.715 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
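The node management ports completing above were planned with a fixed IP and a set of allowed_address_pairs (see the plan output for node_port_management[4]/[5] earlier in this log). Below is a sketch for a single, hypothetical port; the "_example" name, the network/subnet references and the security group wiring are assumptions, and only two of the four address pairs from the plan are repeated here.

```hcl
resource "openstack_networking_port_v2" "node_port_management_example" {
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.14" # node_port_management[4] in the plan
  }

  # Additional prefixes/addresses the port may legitimately source, per the plan.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}
```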
2025-09-20 10:06:07.715847 | orchestrator | 10:06:07.715 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-20 10:06:07.725053 | orchestrator | 10:06:07.724 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-20 10:06:07.729935 | orchestrator | 10:06:07.729 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-20 10:06:07.730081 | orchestrator | 10:06:07.730 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-20 10:06:09.213540 | orchestrator | 10:06:09.213 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=de41d5a5-73fa-4273-bb5b-b879c89e9c0c] 2025-09-20 10:06:09.790255 | orchestrator | 10:06:09.228 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-20 10:06:09.790350 | orchestrator | 10:06:09.229 STDOUT terraform: local_file.inventory: Creating... 2025-09-20 10:06:09.790366 | orchestrator | 10:06:09.232 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-20 10:06:09.998532 | orchestrator | 10:06:09.998 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=de41d5a5-73fa-4273-bb5b-b879c89e9c0c] 2025-09-20 10:06:10.255544 | orchestrator | 10:06:10.255 STDOUT terraform: local_file.inventory: Creation complete after 1s [id=c06545f14a9743c4848470a02a493433fc2490c6] 2025-09-20 10:06:10.256618 | orchestrator | 10:06:10.256 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 1s [id=70a4d2e753d6bed094beae95fbd49df2c2802393] 2025-09-20 10:06:17.717957 | orchestrator | 10:06:17.717 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-20 10:06:17.720848 | orchestrator | 10:06:17.720 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-20 10:06:17.722041 | orchestrator | 10:06:17.721 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-20 10:06:17.730234 | orchestrator | 10:06:17.730 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-20 10:06:17.736754 | orchestrator | 10:06:17.736 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-20 10:06:17.736885 | orchestrator | 10:06:17.736 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-20 10:06:27.718385 | orchestrator | 10:06:27.718 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-20 10:06:27.721632 | orchestrator | 10:06:27.721 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-20 10:06:27.722718 | orchestrator | 10:06:27.722 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-20 10:06:27.731358 | orchestrator | 10:06:27.731 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-20 10:06:27.737577 | orchestrator | 10:06:27.737 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-20 10:06:27.737789 | orchestrator | 10:06:27.737 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[20s elapsed] 2025-09-20 10:06:37.718838 | orchestrator | 10:06:37.718 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-09-20 10:06:37.721838 | orchestrator | 10:06:37.721 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-20 10:06:37.723189 | orchestrator | 10:06:37.722 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-09-20 10:06:37.732899 | orchestrator | 10:06:37.732 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-09-20 10:06:37.737708 | orchestrator | 10:06:37.737 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-20 10:06:37.737797 | orchestrator | 10:06:37.737 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-09-20 10:06:38.281957 | orchestrator | 10:06:38.281 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=5e2b2065-1f46-4729-b1d3-945ed505a6f4] 2025-09-20 10:06:38.358317 | orchestrator | 10:06:38.357 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=98f14752-3df6-4b9f-bdae-16ea0ce32b2f] 2025-09-20 10:06:38.369648 | orchestrator | 10:06:38.369 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=7df7b6e0-a63a-47d3-9b6b-e2287444af8c] 2025-09-20 10:06:38.369744 | orchestrator | 10:06:38.369 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=60670360-02a8-4f86-933f-31c9fbd6d29e] 2025-09-20 10:06:38.402313 | orchestrator | 10:06:38.401 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=3893345b-4422-4bf4-99d9-2ed7f57c22fe] 2025-09-20 10:06:38.415089 | orchestrator | 10:06:38.414 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=e8643f37-7d21-48b1-aee4-932c6c4eba92] 2025-09-20 10:06:38.436367 | orchestrator | 10:06:38.436 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-20 10:06:38.445802 | orchestrator | 10:06:38.445 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3010418786575753843] 2025-09-20 10:06:38.450655 | orchestrator | 10:06:38.450 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-20 10:06:38.453188 | orchestrator | 10:06:38.453 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-20 10:06:38.466829 | orchestrator | 10:06:38.466 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-20 10:06:38.472882 | orchestrator | 10:06:38.471 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-20 10:06:38.477482 | orchestrator | 10:06:38.477 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-20 10:06:38.485951 | orchestrator | 10:06:38.485 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-20 10:06:38.486071 | orchestrator | 10:06:38.485 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-20 10:06:38.493227 | orchestrator | 10:06:38.493 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 
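The nine volume attachments above pair node_volume[0..8] with the last three node_server instances; judging by the instance/volume IDs in the apply output, volume i lands on node_server[3 + (i % 3)]. The sketch below uses that inferred index arithmetic for illustration only, so it may differ from the actual configuration.

```hcl
# Illustration only: count and resource names match the log, the index
# expression is inferred from the reported instance/volume ID pairs.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```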
2025-09-20 10:06:38.493667 | orchestrator | 10:06:38.493 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-20 10:06:38.516763 | orchestrator | 10:06:38.516 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-20 10:06:41.850117 | orchestrator | 10:06:41.849 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=3893345b-4422-4bf4-99d9-2ed7f57c22fe/7249c7d6-d18e-42b1-809d-80705e221d22] 2025-09-20 10:06:41.885598 | orchestrator | 10:06:41.885 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=60670360-02a8-4f86-933f-31c9fbd6d29e/e293ec10-02fe-4251-bcfc-ccec4462aa3b] 2025-09-20 10:06:41.913569 | orchestrator | 10:06:41.913 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=3893345b-4422-4bf4-99d9-2ed7f57c22fe/21304f64-4c3c-4785-baa1-44b6b0fccd58] 2025-09-20 10:06:41.936516 | orchestrator | 10:06:41.936 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=5e2b2065-1f46-4729-b1d3-945ed505a6f4/696d6a7f-e2ae-4e31-b4d8-740f0d8ea949] 2025-09-20 10:06:41.944388 | orchestrator | 10:06:41.943 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=60670360-02a8-4f86-933f-31c9fbd6d29e/c8bcd070-709d-401e-b3ff-1d1dc46d20a8] 2025-09-20 10:06:42.051243 | orchestrator | 10:06:42.050 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=5e2b2065-1f46-4729-b1d3-945ed505a6f4/31f92631-138d-4bd6-ad62-32e6ca0c065f] 2025-09-20 10:06:48.040417 | orchestrator | 10:06:48.039 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=5e2b2065-1f46-4729-b1d3-945ed505a6f4/497e6100-ba4e-4e70-85f7-b35af0c206cf] 2025-09-20 10:06:48.054286 | orchestrator | 10:06:48.053 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=3893345b-4422-4bf4-99d9-2ed7f57c22fe/28f1987a-6b2b-4def-9528-f2d7153ba652] 2025-09-20 10:06:48.073525 | orchestrator | 10:06:48.073 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=60670360-02a8-4f86-933f-31c9fbd6d29e/31ba085f-693b-4453-b385-26f20a05fd2b] 2025-09-20 10:06:48.517134 | orchestrator | 10:06:48.516 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-20 10:06:58.517364 | orchestrator | 10:06:58.517 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-20 10:06:58.949834 | orchestrator | 10:06:58.949 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=efb47b8b-5bde-4655-baee-cbdc4a571b21] 2025-09-20 10:06:58.992224 | orchestrator | 10:06:58.992 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
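With the apply finished, the manager's public address path and the two sensitive outputs reported next can be sketched as follows. Resource and output names match the log; the floating IP pool name and the exact attributes behind the outputs are assumptions.

```hcl
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external" # assumed name of the external network
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

# Both outputs are marked sensitive, which is why their values are not printed
# in the "Outputs:" section of the log below.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```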
2025-09-20 10:06:58.992325 | orchestrator | 10:06:58.992 STDOUT terraform: Outputs: 2025-09-20 10:06:58.992337 | orchestrator | 10:06:58.992 STDOUT terraform: manager_address = 2025-09-20 10:06:58.992345 | orchestrator | 10:06:58.992 STDOUT terraform: private_key = 2025-09-20 10:06:59.326348 | orchestrator | ok: Runtime: 0:01:12.263156 2025-09-20 10:06:59.357598 | 2025-09-20 10:06:59.357738 | TASK [Create infrastructure (stable)] 2025-09-20 10:06:59.888082 | orchestrator | skipping: Conditional result was False 2025-09-20 10:06:59.906223 | 2025-09-20 10:06:59.906394 | TASK [Fetch manager address] 2025-09-20 10:07:00.352572 | orchestrator | ok 2025-09-20 10:07:00.362916 | 2025-09-20 10:07:00.363069 | TASK [Set manager_host address] 2025-09-20 10:07:00.442602 | orchestrator | ok 2025-09-20 10:07:00.452103 | 2025-09-20 10:07:00.452218 | LOOP [Update ansible collections] 2025-09-20 10:07:01.337356 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-20 10:07:01.337852 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-20 10:07:01.337928 | orchestrator | Starting galaxy collection install process 2025-09-20 10:07:01.337970 | orchestrator | Process install dependency map 2025-09-20 10:07:01.338012 | orchestrator | Starting collection install process 2025-09-20 10:07:01.338046 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-09-20 10:07:01.338085 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-09-20 10:07:01.338127 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-20 10:07:01.338195 | orchestrator | ok: Item: commons Runtime: 0:00:00.566439 2025-09-20 10:07:03.411544 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-20 10:07:03.411780 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-20 10:07:03.411851 | orchestrator | Starting galaxy collection install process 2025-09-20 10:07:03.411904 | orchestrator | Process install dependency map 2025-09-20 10:07:03.411953 | orchestrator | Starting collection install process 2025-09-20 10:07:03.411993 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-09-20 10:07:03.412037 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-09-20 10:07:03.412080 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-20 10:07:03.412133 | orchestrator | ok: Item: services Runtime: 0:00:01.774629 2025-09-20 10:07:03.430069 | 2025-09-20 10:07:03.430216 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-20 10:07:13.985644 | orchestrator | ok 2025-09-20 10:07:13.996309 | 2025-09-20 10:07:13.996407 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-20 10:08:14.036095 | orchestrator | ok 2025-09-20 10:08:14.046357 | 2025-09-20 10:08:14.046479 | TASK [Fetch manager ssh hostkey] 2025-09-20 10:08:15.615830 | orchestrator | Output suppressed because no_log was given 2025-09-20 10:08:15.631738 | 2025-09-20 10:08:15.631937 | TASK [Get ssh keypair from terraform environment] 2025-09-20 10:08:16.168044 | orchestrator 
| ok: Runtime: 0:00:00.007554 2025-09-20 10:08:16.180091 | 2025-09-20 10:08:16.180248 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-20 10:08:16.225134 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-20 10:08:16.233731 | 2025-09-20 10:08:16.233842 | TASK [Run manager part 0] 2025-09-20 10:08:17.100984 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-20 10:08:17.145030 | orchestrator | 2025-09-20 10:08:17.145104 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-20 10:08:17.145119 | orchestrator | 2025-09-20 10:08:17.145142 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-20 10:08:19.186119 | orchestrator | ok: [testbed-manager] 2025-09-20 10:08:19.186189 | orchestrator | 2025-09-20 10:08:19.186232 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-20 10:08:19.186253 | orchestrator | 2025-09-20 10:08:19.186273 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:08:21.030344 | orchestrator | ok: [testbed-manager] 2025-09-20 10:08:21.030434 | orchestrator | 2025-09-20 10:08:21.030452 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-20 10:08:21.684735 | orchestrator | ok: [testbed-manager] 2025-09-20 10:08:21.684805 | orchestrator | 2025-09-20 10:08:21.684813 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-20 10:08:21.722940 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.722996 | orchestrator | 2025-09-20 10:08:21.723008 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-20 10:08:21.756803 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.756893 | orchestrator | 2025-09-20 10:08:21.756909 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-20 10:08:21.781833 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.781920 | orchestrator | 2025-09-20 10:08:21.781936 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-20 10:08:21.804211 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.804272 | orchestrator | 2025-09-20 10:08:21.804283 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-20 10:08:21.833467 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.833528 | orchestrator | 2025-09-20 10:08:21.833540 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-20 10:08:21.862052 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.862140 | orchestrator | 2025-09-20 10:08:21.862160 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-20 10:08:21.893331 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:08:21.893384 | orchestrator | 2025-09-20 10:08:21.893393 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-20 10:08:22.626563 | orchestrator | changed: 
[testbed-manager] 2025-09-20 10:08:22.626610 | orchestrator | 2025-09-20 10:08:22.626616 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-20 10:11:01.722505 | orchestrator | changed: [testbed-manager] 2025-09-20 10:11:01.722598 | orchestrator | 2025-09-20 10:11:01.722618 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-20 10:12:24.687611 | orchestrator | changed: [testbed-manager] 2025-09-20 10:12:24.687681 | orchestrator | 2025-09-20 10:12:24.687691 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-20 10:12:49.016451 | orchestrator | changed: [testbed-manager] 2025-09-20 10:12:49.016550 | orchestrator | 2025-09-20 10:12:49.016570 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-20 10:12:58.323104 | orchestrator | changed: [testbed-manager] 2025-09-20 10:12:58.323308 | orchestrator | 2025-09-20 10:12:58.323331 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-20 10:12:58.370700 | orchestrator | ok: [testbed-manager] 2025-09-20 10:12:58.370762 | orchestrator | 2025-09-20 10:12:58.370794 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-20 10:12:59.166819 | orchestrator | ok: [testbed-manager] 2025-09-20 10:12:59.166902 | orchestrator | 2025-09-20 10:12:59.166920 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-20 10:12:59.940565 | orchestrator | changed: [testbed-manager] 2025-09-20 10:12:59.940610 | orchestrator | 2025-09-20 10:12:59.940620 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-20 10:13:06.506741 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:06.506840 | orchestrator | 2025-09-20 10:13:06.506881 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-20 10:13:12.299891 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:12.299923 | orchestrator | 2025-09-20 10:13:12.299932 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-20 10:13:14.893686 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:14.893871 | orchestrator | 2025-09-20 10:13:14.893891 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-20 10:13:16.711425 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:16.711468 | orchestrator | 2025-09-20 10:13:16.711477 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-20 10:13:17.930346 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-20 10:13:17.930391 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-20 10:13:17.930399 | orchestrator | 2025-09-20 10:13:17.930407 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-20 10:13:17.972931 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-20 10:13:17.973009 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-20 10:13:17.973024 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-20 10:13:17.973036 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-20 10:13:22.189082 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-20 10:13:22.189167 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-20 10:13:22.189180 | orchestrator | 2025-09-20 10:13:22.189192 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-20 10:13:22.729870 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:22.729955 | orchestrator | 2025-09-20 10:13:22.729970 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-20 10:13:42.479306 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-20 10:13:42.479397 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-20 10:13:42.479415 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-20 10:13:42.479427 | orchestrator | 2025-09-20 10:13:42.479440 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-20 10:13:44.648332 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-20 10:13:44.648906 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-20 10:13:44.648925 | orchestrator | 2025-09-20 10:13:44.648935 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-20 10:13:44.648943 | orchestrator | 2025-09-20 10:13:44.648951 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:13:45.960340 | orchestrator | ok: [testbed-manager] 2025-09-20 10:13:45.960429 | orchestrator | 2025-09-20 10:13:45.960446 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-20 10:13:46.003514 | orchestrator | ok: [testbed-manager] 2025-09-20 10:13:46.003611 | orchestrator | 2025-09-20 10:13:46.003636 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-20 10:13:46.062412 | orchestrator | ok: [testbed-manager] 2025-09-20 10:13:46.062493 | orchestrator | 2025-09-20 10:13:46.062508 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-20 10:13:46.814517 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:46.814600 | orchestrator | 2025-09-20 10:13:46.814615 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-20 10:13:47.556102 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:47.556162 | orchestrator | 2025-09-20 10:13:47.556172 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-20 10:13:48.963529 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-20 10:13:48.963616 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-20 10:13:48.963630 | orchestrator | 2025-09-20 10:13:48.963658 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-20 10:13:50.364004 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:50.364124 | orchestrator | 2025-09-20 10:13:50.364144 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-20 10:13:52.121180 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:13:52.121252 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-20 10:13:52.121263 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:13:52.121272 | orchestrator | 2025-09-20 10:13:52.121282 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-20 10:13:52.179584 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:13:52.179668 | orchestrator | 2025-09-20 10:13:52.179685 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-20 10:13:52.752267 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:52.752367 | orchestrator | 2025-09-20 10:13:52.752387 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-20 10:13:52.819838 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:13:52.819920 | orchestrator | 2025-09-20 10:13:52.819936 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-20 10:13:53.677928 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 10:13:53.678002 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:53.678047 | orchestrator | 2025-09-20 10:13:53.678067 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-20 10:13:53.715852 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:13:53.715933 | orchestrator | 2025-09-20 10:13:53.715948 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-20 10:13:53.750100 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:13:53.750172 | orchestrator | 2025-09-20 10:13:53.750188 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-20 10:13:53.779836 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:13:53.779902 | orchestrator | 2025-09-20 10:13:53.779916 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-20 10:13:53.832431 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:13:53.832568 | orchestrator | 2025-09-20 10:13:53.832589 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-20 10:13:54.600054 | orchestrator | ok: [testbed-manager] 2025-09-20 10:13:54.600143 | orchestrator | 2025-09-20 10:13:54.600159 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-20 10:13:54.600171 | orchestrator | 2025-09-20 10:13:54.600182 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:13:55.994570 | orchestrator | ok: [testbed-manager] 2025-09-20 10:13:55.994656 | orchestrator | 2025-09-20 10:13:55.994672 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-20 10:13:56.956873 | orchestrator | changed: [testbed-manager] 2025-09-20 10:13:56.956909 | orchestrator | 2025-09-20 10:13:56.956914 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:13:56.956920 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-20 
10:13:56.956924 | orchestrator | 2025-09-20 10:13:57.457269 | orchestrator | ok: Runtime: 0:05:40.528961 2025-09-20 10:13:57.470176 | 2025-09-20 10:13:57.470312 | TASK [Point out that the log in on the manager is now possible] 2025-09-20 10:13:57.501197 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-20 10:13:57.508814 | 2025-09-20 10:13:57.508932 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-20 10:13:57.539049 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-20 10:13:57.546537 | 2025-09-20 10:13:57.546649 | TASK [Run manager part 1 + 2] 2025-09-20 10:13:58.563210 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-20 10:13:58.626283 | orchestrator | 2025-09-20 10:13:58.626333 | orchestrator | PLAY [Run manager part 1] ******************************************************* 2025-09-20 10:13:58.626340 | orchestrator | 2025-09-20 10:13:58.626352 | orchestrator | TASK [Gathering Facts] ********************************************************** 2025-09-20 10:14:01.191915 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:01.191973 | orchestrator | 2025-09-20 10:14:01.191997 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************* 2025-09-20 10:14:01.232878 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:14:01.232929 | orchestrator | 2025-09-20 10:14:01.232939 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************* 2025-09-20 10:14:01.280316 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:01.280366 | orchestrator | 2025-09-20 10:14:01.280380 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-20 10:14:01.319211 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:01.319257 | orchestrator | 2025-09-20 10:14:01.319267 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-20 10:14:01.386734 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:01.386787 | orchestrator | 2025-09-20 10:14:01.386818 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-20 10:14:01.445740 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:01.445808 | orchestrator | 2025-09-20 10:14:01.445819 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-20 10:14:01.487361 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-20 10:14:01.487398 | orchestrator | 2025-09-20 10:14:01.487403 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-20 10:14:02.203955 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:02.204002 | orchestrator | 2025-09-20 10:14:02.204012 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-20 10:14:02.246399 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:14:02.246440 | orchestrator | 2025-09-20 10:14:02.246449 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-20 10:14:03.550053 | orchestrator | changed:
[testbed-manager] 2025-09-20 10:14:03.550103 | orchestrator | 2025-09-20 10:14:03.550113 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-20 10:14:04.125880 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:04.125921 | orchestrator | 2025-09-20 10:14:04.125928 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-20 10:14:05.287767 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:05.287855 | orchestrator | 2025-09-20 10:14:05.287866 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-20 10:14:22.239776 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:22.239872 | orchestrator | 2025-09-20 10:14:22.239889 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-20 10:14:22.894516 | orchestrator | ok: [testbed-manager] 2025-09-20 10:14:22.894599 | orchestrator | 2025-09-20 10:14:22.894617 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-20 10:14:22.948357 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:14:22.948415 | orchestrator | 2025-09-20 10:14:22.948422 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-20 10:14:23.980690 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:23.980779 | orchestrator | 2025-09-20 10:14:23.980796 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-20 10:14:24.930459 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:24.930501 | orchestrator | 2025-09-20 10:14:24.930510 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-20 10:14:25.514287 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:25.514373 | orchestrator | 2025-09-20 10:14:25.514389 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-20 10:14:25.548761 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-20 10:14:25.548886 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-20 10:14:25.548902 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-20 10:14:25.548914 | orchestrator | deprecation_warnings=False in ansible.cfg. 
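The osism.commons.repository tasks above remove the manager's legacy /etc/apt/sources.list, ship a deb822-style ubuntu.sources file and then refresh the package cache. The shell sketch below approximates that sequence by hand; the mirror URIs, suites and keyring path are illustrative assumptions, not the role's actual template.

# Hand-rolled approximation of the repository role's effect shown above (values are illustrative).
sudo rm -f /etc/apt/sources.list
sudo install -d -m 0755 /etc/apt/sources.list.d
sudo tee /etc/apt/sources.list.d/ubuntu.sources > /dev/null <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
sudo apt-get update    # corresponds to the "Update package cache" task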
2025-09-20 10:14:28.690876 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:28.690981 | orchestrator | 2025-09-20 10:14:28.691000 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-20 10:14:37.816652 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-20 10:14:37.816695 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-20 10:14:37.816703 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-20 10:14:37.816710 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-20 10:14:37.816719 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-20 10:14:37.816725 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-20 10:14:37.816731 | orchestrator | 2025-09-20 10:14:37.816737 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-20 10:14:38.881203 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:38.881286 | orchestrator | 2025-09-20 10:14:38.881302 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-20 10:14:38.922413 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:14:38.922481 | orchestrator | 2025-09-20 10:14:38.922495 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-20 10:14:41.912563 | orchestrator | changed: [testbed-manager] 2025-09-20 10:14:41.912602 | orchestrator | 2025-09-20 10:14:41.912610 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-20 10:14:41.953056 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:14:41.953093 | orchestrator | 2025-09-20 10:14:41.953102 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-20 10:16:16.757191 | orchestrator | changed: [testbed-manager] 2025-09-20 10:16:16.757251 | orchestrator | 2025-09-20 10:16:16.757261 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-20 10:16:17.920247 | orchestrator | ok: [testbed-manager] 2025-09-20 10:16:17.920285 | orchestrator | 2025-09-20 10:16:17.920292 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:16:17.920299 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-20 10:16:17.920304 | orchestrator | 2025-09-20 10:16:18.159048 | orchestrator | ok: Runtime: 0:02:20.165177 2025-09-20 10:16:18.182438 | 2025-09-20 10:16:18.182643 | TASK [Reboot manager] 2025-09-20 10:16:19.728034 | orchestrator | ok: Runtime: 0:00:00.985180 2025-09-20 10:16:19.743879 | 2025-09-20 10:16:19.744027 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-20 10:16:36.143290 | orchestrator | ok 2025-09-20 10:16:36.151270 | 2025-09-20 10:16:36.151385 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-20 10:17:36.196572 | orchestrator | ok 2025-09-20 10:17:36.210135 | 2025-09-20 10:17:36.210290 | TASK [Deploy manager + bootstrap nodes] 2025-09-20 10:17:38.966278 | orchestrator | 2025-09-20 10:17:38.966471 | orchestrator | # DEPLOY MANAGER 2025-09-20 10:17:38.966496 | orchestrator | 2025-09-20 10:17:38.966511 | orchestrator | + set -e 2025-09-20 10:17:38.966524 | orchestrator | + echo 2025-09-20 10:17:38.966538 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-20 10:17:38.966555 | orchestrator | + echo 2025-09-20 10:17:38.966605 | orchestrator | + cat /opt/manager-vars.sh 2025-09-20 10:17:38.970133 | orchestrator | export NUMBER_OF_NODES=6 2025-09-20 10:17:38.970157 | orchestrator | 2025-09-20 10:17:38.970170 | orchestrator | export CEPH_VERSION=reef 2025-09-20 10:17:38.970184 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-20 10:17:38.970197 | orchestrator | export MANAGER_VERSION=latest 2025-09-20 10:17:38.970219 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-20 10:17:38.970230 | orchestrator | 2025-09-20 10:17:38.970249 | orchestrator | export ARA=false 2025-09-20 10:17:38.970260 | orchestrator | export DEPLOY_MODE=manager 2025-09-20 10:17:38.970278 | orchestrator | export TEMPEST=false 2025-09-20 10:17:38.970289 | orchestrator | export IS_ZUUL=true 2025-09-20 10:17:38.970300 | orchestrator | 2025-09-20 10:17:38.970318 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:17:38.970330 | orchestrator | export EXTERNAL_API=false 2025-09-20 10:17:38.970341 | orchestrator | 2025-09-20 10:17:38.970351 | orchestrator | export IMAGE_USER=ubuntu 2025-09-20 10:17:38.970366 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-20 10:17:38.970377 | orchestrator | 2025-09-20 10:17:38.970388 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-20 10:17:38.970404 | orchestrator | 2025-09-20 10:17:38.970415 | orchestrator | + echo 2025-09-20 10:17:38.970427 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 10:17:38.971202 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 10:17:38.971219 | orchestrator | ++ INTERACTIVE=false 2025-09-20 10:17:38.971231 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 10:17:38.971244 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 10:17:38.971368 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 10:17:38.971384 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 10:17:38.971396 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 10:17:38.971407 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 10:17:38.971418 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 10:17:38.971429 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 10:17:38.971440 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 10:17:38.971455 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:17:38.971466 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:17:38.971477 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 10:17:38.971496 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 10:17:38.971507 | orchestrator | ++ export ARA=false 2025-09-20 10:17:38.971518 | orchestrator | ++ ARA=false 2025-09-20 10:17:38.971529 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 10:17:38.971539 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 10:17:38.971550 | orchestrator | ++ export TEMPEST=false 2025-09-20 10:17:38.971560 | orchestrator | ++ TEMPEST=false 2025-09-20 10:17:38.971571 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 10:17:38.971581 | orchestrator | ++ IS_ZUUL=true 2025-09-20 10:17:38.971592 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:17:38.971603 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:17:38.971617 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 10:17:38.971628 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 10:17:38.971639 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 
10:17:38.971649 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 10:17:38.971660 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 10:17:38.971671 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 10:17:38.971682 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 10:17:38.971692 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 10:17:38.971703 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-20 10:17:39.032122 | orchestrator | + docker version 2025-09-20 10:17:39.318093 | orchestrator | Client: Docker Engine - Community 2025-09-20 10:17:39.318177 | orchestrator | Version: 27.5.1 2025-09-20 10:17:39.318189 | orchestrator | API version: 1.47 2025-09-20 10:17:39.318197 | orchestrator | Go version: go1.22.11 2025-09-20 10:17:39.318205 | orchestrator | Git commit: 9f9e405 2025-09-20 10:17:39.318213 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-20 10:17:39.318222 | orchestrator | OS/Arch: linux/amd64 2025-09-20 10:17:39.318229 | orchestrator | Context: default 2025-09-20 10:17:39.318236 | orchestrator | 2025-09-20 10:17:39.318244 | orchestrator | Server: Docker Engine - Community 2025-09-20 10:17:39.318251 | orchestrator | Engine: 2025-09-20 10:17:39.318259 | orchestrator | Version: 27.5.1 2025-09-20 10:17:39.318267 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-20 10:17:39.318301 | orchestrator | Go version: go1.22.11 2025-09-20 10:17:39.318308 | orchestrator | Git commit: 4c9b3b0 2025-09-20 10:17:39.318316 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-20 10:17:39.318323 | orchestrator | OS/Arch: linux/amd64 2025-09-20 10:17:39.318330 | orchestrator | Experimental: false 2025-09-20 10:17:39.318337 | orchestrator | containerd: 2025-09-20 10:17:39.318345 | orchestrator | Version: 1.7.27 2025-09-20 10:17:39.318352 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-20 10:17:39.318359 | orchestrator | runc: 2025-09-20 10:17:39.318367 | orchestrator | Version: 1.2.5 2025-09-20 10:17:39.318374 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-20 10:17:39.318381 | orchestrator | docker-init: 2025-09-20 10:17:39.318728 | orchestrator | Version: 0.19.0 2025-09-20 10:17:39.318746 | orchestrator | GitCommit: de40ad0 2025-09-20 10:17:39.322261 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-20 10:17:39.332990 | orchestrator | + set -e 2025-09-20 10:17:39.333018 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 10:17:39.333033 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 10:17:39.333048 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 10:17:39.333063 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 10:17:39.333078 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 10:17:39.333093 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 10:17:39.333109 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 10:17:39.333124 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:17:39.333139 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:17:39.333155 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 10:17:39.333164 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 10:17:39.333179 | orchestrator | ++ export ARA=false 2025-09-20 10:17:39.333194 | orchestrator | ++ ARA=false 2025-09-20 10:17:39.333209 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 10:17:39.333224 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 10:17:39.333245 | orchestrator | ++ 
export TEMPEST=false 2025-09-20 10:17:39.333256 | orchestrator | ++ TEMPEST=false 2025-09-20 10:17:39.333265 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 10:17:39.333273 | orchestrator | ++ IS_ZUUL=true 2025-09-20 10:17:39.333282 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:17:39.333291 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:17:39.333299 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 10:17:39.333308 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 10:17:39.333316 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 10:17:39.333324 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 10:17:39.333333 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 10:17:39.333342 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 10:17:39.333350 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 10:17:39.333359 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 10:17:39.333370 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 10:17:39.333385 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 10:17:39.333401 | orchestrator | ++ INTERACTIVE=false 2025-09-20 10:17:39.333415 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 10:17:39.333434 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 10:17:39.333550 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 10:17:39.333566 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 10:17:39.333575 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-20 10:17:39.340502 | orchestrator | + set -e 2025-09-20 10:17:39.341213 | orchestrator | + VERSION=reef 2025-09-20 10:17:39.341853 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-20 10:17:39.349218 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-20 10:17:39.349254 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-20 10:17:39.355811 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-20 10:17:39.364444 | orchestrator | + set -e 2025-09-20 10:17:39.364464 | orchestrator | + VERSION=2024.2 2025-09-20 10:17:39.365974 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-20 10:17:39.369737 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-20 10:17:39.369782 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-20 10:17:39.376256 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-20 10:17:39.377277 | orchestrator | ++ semver latest 7.0.0 2025-09-20 10:17:39.445968 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 10:17:39.446061 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 10:17:39.446077 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-20 10:17:39.446089 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-20 10:17:39.533344 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-20 10:17:39.539953 | orchestrator | + source /opt/venv/bin/activate 2025-09-20 10:17:39.541232 | orchestrator | ++ deactivate nondestructive 2025-09-20 10:17:39.541252 | orchestrator | ++ '[' -n '' ']' 2025-09-20 10:17:39.541265 | orchestrator | ++ '[' -n '' ']' 2025-09-20 10:17:39.541276 | orchestrator | ++ hash -r 2025-09-20 10:17:39.541287 | orchestrator | ++ 
'[' -n '' ']' 2025-09-20 10:17:39.541298 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-20 10:17:39.541428 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-20 10:17:39.541443 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-20 10:17:39.541549 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-20 10:17:39.541565 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-20 10:17:39.541576 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-20 10:17:39.541587 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-20 10:17:39.541598 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-20 10:17:39.541610 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-20 10:17:39.541621 | orchestrator | ++ export PATH 2025-09-20 10:17:39.541740 | orchestrator | ++ '[' -n '' ']' 2025-09-20 10:17:39.541754 | orchestrator | ++ '[' -z '' ']' 2025-09-20 10:17:39.541765 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-20 10:17:39.541776 | orchestrator | ++ PS1='(venv) ' 2025-09-20 10:17:39.541787 | orchestrator | ++ export PS1 2025-09-20 10:17:39.541797 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-20 10:17:39.541808 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-20 10:17:39.541819 | orchestrator | ++ hash -r 2025-09-20 10:17:39.541853 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-20 10:17:40.916556 | orchestrator | 2025-09-20 10:17:40.916675 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-20 10:17:40.916692 | orchestrator | 2025-09-20 10:17:40.916766 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-20 10:17:41.517638 | orchestrator | ok: [testbed-manager] 2025-09-20 10:17:41.517741 | orchestrator | 2025-09-20 10:17:41.517757 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-20 10:17:42.563996 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:42.564107 | orchestrator | 2025-09-20 10:17:42.564124 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-20 10:17:42.564138 | orchestrator | 2025-09-20 10:17:42.564149 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:17:44.986811 | orchestrator | ok: [testbed-manager] 2025-09-20 10:17:44.986936 | orchestrator | 2025-09-20 10:17:44.986954 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-20 10:17:45.049943 | orchestrator | ok: [testbed-manager] 2025-09-20 10:17:45.050069 | orchestrator | 2025-09-20 10:17:45.050088 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-20 10:17:45.521267 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:45.521351 | orchestrator | 2025-09-20 10:17:45.521364 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-20 10:17:45.558704 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:17:45.558741 | orchestrator | 2025-09-20 10:17:45.558750 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-20 10:17:45.908951 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:45.909069 | orchestrator | 2025-09-20 10:17:45.909092 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-20 10:17:45.957978 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:17:45.958065 | orchestrator | 2025-09-20 10:17:45.958071 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-20 10:17:46.369677 | orchestrator | ok: [testbed-manager] 2025-09-20 10:17:46.369774 | orchestrator | 2025-09-20 10:17:46.369790 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-20 10:17:46.512746 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:17:46.512828 | orchestrator | 2025-09-20 10:17:46.512843 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-20 10:17:46.512855 | orchestrator | 2025-09-20 10:17:46.512869 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:17:48.275346 | orchestrator | ok: [testbed-manager] 2025-09-20 10:17:48.275439 | orchestrator | 2025-09-20 10:17:48.275455 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-20 10:17:48.389444 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-20 10:17:48.389520 | orchestrator | 2025-09-20 10:17:48.389534 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-20 10:17:48.446440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-20 10:17:48.446473 | orchestrator | 2025-09-20 10:17:48.446485 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-20 10:17:49.623388 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-20 10:17:49.623486 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-20 10:17:49.623500 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-20 10:17:49.623513 | orchestrator | 2025-09-20 10:17:49.623525 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-20 10:17:51.495443 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-20 10:17:51.495566 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-20 10:17:51.495582 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-20 10:17:51.495605 | orchestrator | 2025-09-20 10:17:51.496380 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-20 10:17:52.175331 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 10:17:52.175415 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:52.175429 | orchestrator | 2025-09-20 10:17:52.175440 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-20 10:17:52.828363 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 10:17:52.828455 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:52.828470 | orchestrator | 2025-09-20 10:17:52.828483 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-20 10:17:52.893924 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:17:52.893988 | orchestrator | 2025-09-20 10:17:52.894001 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-20 10:17:53.271447 | orchestrator | ok: [testbed-manager] 2025-09-20 10:17:53.271531 | orchestrator | 2025-09-20 10:17:53.271546 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-20 10:17:53.347361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-20 10:17:53.347429 | orchestrator | 2025-09-20 10:17:53.347443 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-20 10:17:54.418364 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:54.418464 | orchestrator | 2025-09-20 10:17:54.418481 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-20 10:17:55.318502 | orchestrator | changed: [testbed-manager] 2025-09-20 10:17:55.318600 | orchestrator | 2025-09-20 10:17:55.318616 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-20 10:18:07.278673 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:07.278777 | orchestrator | 2025-09-20 10:18:07.278794 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-20 10:18:07.327918 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:18:07.327961 | orchestrator | 2025-09-20 10:18:07.327976 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-20 10:18:07.327988 | orchestrator | 2025-09-20 10:18:07.327999 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:18:09.201101 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:09.201207 | orchestrator | 2025-09-20 10:18:09.201258 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-20 10:18:09.317263 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-20 10:18:09.317352 | orchestrator | 2025-09-20 10:18:09.317367 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-20 10:18:09.384302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 10:18:09.384376 | orchestrator | 2025-09-20 10:18:09.384389 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-20 10:18:11.923235 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:11.923336 | orchestrator | 2025-09-20 10:18:11.923352 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-20 10:18:11.977241 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:11.977295 | orchestrator | 2025-09-20 10:18:11.977310 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-20 10:18:12.081545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-20 10:18:12.081597 | orchestrator | 2025-09-20 10:18:12.081609 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-20 10:18:14.943078 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-20 10:18:14.943184 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-20 10:18:14.943199 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-20 10:18:14.943211 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-20 10:18:14.943223 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-20 10:18:14.943234 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-20 10:18:14.943244 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-20 10:18:14.943255 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-20 10:18:14.943266 | orchestrator | 2025-09-20 10:18:14.943277 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-20 10:18:15.597301 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:15.597385 | orchestrator | 2025-09-20 10:18:15.597395 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-20 10:18:16.240132 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:16.240215 | orchestrator | 2025-09-20 10:18:16.240225 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-20 10:18:16.338003 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-20 10:18:16.338170 | orchestrator | 2025-09-20 10:18:16.338191 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-20 10:18:17.608233 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-20 10:18:17.608337 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-20 10:18:17.608351 | orchestrator | 2025-09-20 10:18:17.608364 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-20 10:18:18.263248 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:18.263335 | orchestrator | 2025-09-20 10:18:18.263349 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-20 10:18:18.325282 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:18:18.325336 | orchestrator | 2025-09-20 10:18:18.325344 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-20 10:18:18.409238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-20 10:18:18.409299 | orchestrator | 2025-09-20 10:18:18.409305 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-20 10:18:19.060090 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:19.060179 | orchestrator | 2025-09-20 10:18:19.060194 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-20 10:18:19.132290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-20 10:18:19.132374 | orchestrator | 2025-09-20 10:18:19.132388 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-20 10:18:20.491970 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 10:18:20.492016 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 10:18:20.492021 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:20.492027 | orchestrator | 2025-09-20 10:18:20.492031 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-20 10:18:21.053165 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:21.053264 | orchestrator | 2025-09-20 10:18:21.053279 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-20 10:18:21.105718 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:18:21.105776 | orchestrator | 2025-09-20 10:18:21.105788 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-20 10:18:21.200275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-20 10:18:21.200357 | orchestrator | 2025-09-20 10:18:21.200371 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-20 10:18:21.694671 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:21.694768 | orchestrator | 2025-09-20 10:18:21.694782 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-20 10:18:22.057521 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:22.057616 | orchestrator | 2025-09-20 10:18:22.057629 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-20 10:18:23.248162 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-20 10:18:23.248250 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-20 10:18:23.248266 | orchestrator | 2025-09-20 10:18:23.248279 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-20 10:18:23.817740 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:23.817825 | orchestrator | 2025-09-20 10:18:23.817840 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-20 10:18:24.189744 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:24.189871 | orchestrator | 2025-09-20 10:18:24.189917 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-20 10:18:24.535503 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:24.535584 | orchestrator | 2025-09-20 10:18:24.535598 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-20 10:18:24.581838 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:18:24.581936 | orchestrator | 2025-09-20 10:18:24.581951 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-20 10:18:24.681410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-20 10:18:24.681495 | orchestrator | 2025-09-20 10:18:24.681518 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-20 10:18:24.732094 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:24.732136 | 
orchestrator | 2025-09-20 10:18:24.732149 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-20 10:18:26.618527 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-20 10:18:26.618646 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-20 10:18:26.618672 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-20 10:18:26.618693 | orchestrator | 2025-09-20 10:18:26.618714 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-20 10:18:27.301276 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:27.301362 | orchestrator | 2025-09-20 10:18:27.301381 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-20 10:18:27.968822 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:27.968925 | orchestrator | 2025-09-20 10:18:27.968942 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-20 10:18:28.621155 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:28.621235 | orchestrator | 2025-09-20 10:18:28.621248 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-20 10:18:28.708523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-20 10:18:28.708597 | orchestrator | 2025-09-20 10:18:28.708610 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-20 10:18:28.749374 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:28.749428 | orchestrator | 2025-09-20 10:18:28.749443 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-20 10:18:29.409563 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-20 10:18:29.409673 | orchestrator | 2025-09-20 10:18:29.409700 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-20 10:18:29.505074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-20 10:18:29.505129 | orchestrator | 2025-09-20 10:18:29.505141 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-20 10:18:30.162163 | orchestrator | changed: [testbed-manager] 2025-09-20 10:18:30.162248 | orchestrator | 2025-09-20 10:18:30.162263 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-20 10:18:30.697843 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:30.697948 | orchestrator | 2025-09-20 10:18:30.697964 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-20 10:18:30.746175 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:18:30.746235 | orchestrator | 2025-09-20 10:18:30.746248 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-20 10:18:30.796207 | orchestrator | ok: [testbed-manager] 2025-09-20 10:18:30.796264 | orchestrator | 2025-09-20 10:18:30.796280 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-20 10:18:31.561567 | orchestrator | changed: [testbed-manager] 2025-09-20 
10:18:31.561668 | orchestrator | 2025-09-20 10:18:31.561682 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-20 10:19:41.298541 | orchestrator | changed: [testbed-manager] 2025-09-20 10:19:41.298655 | orchestrator | 2025-09-20 10:19:41.298673 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-20 10:19:42.196439 | orchestrator | ok: [testbed-manager] 2025-09-20 10:19:42.196544 | orchestrator | 2025-09-20 10:19:42.196558 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-20 10:19:42.241476 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:19:42.241506 | orchestrator | 2025-09-20 10:19:42.241521 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-20 10:19:44.795483 | orchestrator | changed: [testbed-manager] 2025-09-20 10:19:44.795584 | orchestrator | 2025-09-20 10:19:44.795601 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-20 10:19:44.846543 | orchestrator | ok: [testbed-manager] 2025-09-20 10:19:44.846600 | orchestrator | 2025-09-20 10:19:44.846613 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-20 10:19:44.846625 | orchestrator | 2025-09-20 10:19:44.846636 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-20 10:19:44.887437 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:19:44.887520 | orchestrator | 2025-09-20 10:19:44.887536 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-20 10:20:44.935606 | orchestrator | Pausing for 60 seconds 2025-09-20 10:20:44.935729 | orchestrator | changed: [testbed-manager] 2025-09-20 10:20:44.935744 | orchestrator | 2025-09-20 10:20:44.935757 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-20 10:20:50.108103 | orchestrator | changed: [testbed-manager] 2025-09-20 10:20:50.108212 | orchestrator | 2025-09-20 10:20:50.108228 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-09-20 10:21:31.886609 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-20 10:21:31.886718 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
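At this point the manager service has been brought up via docker compose and a handler keeps polling until Docker reports it healthy; the run above needed two retries before the check passed. The deploy script that runs afterwards relies on the same pattern through a wait_for_container_healthy helper, whose trace (docker inspect on the health status) appears further below. The following is a reconstruction of that helper from the trace; the poll interval is an assumption, since in the traced run every container was already healthy on the first check.

# Reconstruction of the health-wait pattern from the deploy script trace below.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Ask Docker for the container's healthcheck state until it reports "healthy".
    until [ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed interval; not visible in the trace
    done
}

wait_for_container_healthy 60 osism-ansible   # invoked like this in the trace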
2025-09-20 10:21:31.886732 | orchestrator | changed: [testbed-manager] 2025-09-20 10:21:31.886772 | orchestrator | 2025-09-20 10:21:31.886784 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-20 10:21:41.994385 | orchestrator | changed: [testbed-manager] 2025-09-20 10:21:41.994515 | orchestrator | 2025-09-20 10:21:41.994523 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-20 10:21:42.064895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-20 10:21:42.064979 | orchestrator | 2025-09-20 10:21:42.064991 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-20 10:21:42.065001 | orchestrator | 2025-09-20 10:21:42.065011 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-20 10:21:42.124671 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:21:42.124721 | orchestrator | 2025-09-20 10:21:42.124733 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:21:42.124746 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-20 10:21:42.124758 | orchestrator | 2025-09-20 10:21:42.234127 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-20 10:21:42.234184 | orchestrator | + deactivate 2025-09-20 10:21:42.234193 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-20 10:21:42.234201 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-20 10:21:42.234207 | orchestrator | + export PATH 2025-09-20 10:21:42.234212 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-20 10:21:42.234218 | orchestrator | + '[' -n '' ']' 2025-09-20 10:21:42.234223 | orchestrator | + hash -r 2025-09-20 10:21:42.234250 | orchestrator | + '[' -n '' ']' 2025-09-20 10:21:42.234255 | orchestrator | + unset VIRTUAL_ENV 2025-09-20 10:21:42.234261 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-20 10:21:42.234266 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-20 10:21:42.234271 | orchestrator | + unset -f deactivate 2025-09-20 10:21:42.234278 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-20 10:21:42.241902 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-20 10:21:42.241922 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-20 10:21:42.241928 | orchestrator | + local max_attempts=60 2025-09-20 10:21:42.241933 | orchestrator | + local name=ceph-ansible 2025-09-20 10:21:42.241939 | orchestrator | + local attempt_num=1 2025-09-20 10:21:42.242777 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:21:42.282295 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:21:42.282336 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-20 10:21:42.282344 | orchestrator | + local max_attempts=60 2025-09-20 10:21:42.282351 | orchestrator | + local name=kolla-ansible 2025-09-20 10:21:42.282357 | orchestrator | + local attempt_num=1 2025-09-20 10:21:42.282942 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-20 10:21:42.321799 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:21:42.321832 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-20 10:21:42.321840 | orchestrator | + local max_attempts=60 2025-09-20 10:21:42.321847 | orchestrator | + local name=osism-ansible 2025-09-20 10:21:42.321854 | orchestrator | + local attempt_num=1 2025-09-20 10:21:42.322841 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-20 10:21:42.362808 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:21:42.362828 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-20 10:21:42.362832 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-20 10:21:43.170123 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-20 10:21:43.382737 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-20 10:21:43.382840 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.382852 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.382886 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-20 10:21:43.382896 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-20 10:21:43.382913 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.382922 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.382929 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-09-20 10:21:43.382937 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.382945 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-20 10:21:43.382953 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.382961 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-20 10:21:43.382995 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.383003 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-20 10:21:43.383010 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.383018 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-20 10:21:43.390920 | orchestrator | ++ semver latest 7.0.0 2025-09-20 10:21:43.451300 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 10:21:43.451377 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 10:21:43.451427 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-20 10:21:43.456648 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-20 10:21:55.700262 | orchestrator | 2025-09-20 10:21:55 | INFO  | Task 59ce0fb7-f5c3-4422-9c98-e355eb903391 (resolvconf) was prepared for execution. 2025-09-20 10:21:55.700349 | orchestrator | 2025-09-20 10:21:55 | INFO  | It takes a moment until task 59ce0fb7-f5c3-4422-9c98-e355eb903391 (resolvconf) has been started and output is visible here. 
2025-09-20 10:22:10.457722 | orchestrator | 2025-09-20 10:22:10.457852 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-20 10:22:10.457882 | orchestrator | 2025-09-20 10:22:10.457894 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:22:10.457932 | orchestrator | Saturday 20 September 2025 10:21:59 +0000 (0:00:00.159) 0:00:00.159 **** 2025-09-20 10:22:10.457944 | orchestrator | ok: [testbed-manager] 2025-09-20 10:22:10.457956 | orchestrator | 2025-09-20 10:22:10.457967 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-20 10:22:10.458009 | orchestrator | Saturday 20 September 2025 10:22:04 +0000 (0:00:04.856) 0:00:05.015 **** 2025-09-20 10:22:10.458062 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:22:10.458087 | orchestrator | 2025-09-20 10:22:10.458098 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-20 10:22:10.458109 | orchestrator | Saturday 20 September 2025 10:22:04 +0000 (0:00:00.067) 0:00:05.083 **** 2025-09-20 10:22:10.458120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-20 10:22:10.458131 | orchestrator | 2025-09-20 10:22:10.458142 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-20 10:22:10.458152 | orchestrator | Saturday 20 September 2025 10:22:04 +0000 (0:00:00.088) 0:00:05.171 **** 2025-09-20 10:22:10.458163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 10:22:10.458174 | orchestrator | 2025-09-20 10:22:10.458185 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-20 10:22:10.458195 | orchestrator | Saturday 20 September 2025 10:22:04 +0000 (0:00:00.083) 0:00:05.254 **** 2025-09-20 10:22:10.458206 | orchestrator | ok: [testbed-manager] 2025-09-20 10:22:10.458217 | orchestrator | 2025-09-20 10:22:10.458228 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-20 10:22:10.458238 | orchestrator | Saturday 20 September 2025 10:22:05 +0000 (0:00:01.115) 0:00:06.370 **** 2025-09-20 10:22:10.458249 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:22:10.458260 | orchestrator | 2025-09-20 10:22:10.458270 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-20 10:22:10.458281 | orchestrator | Saturday 20 September 2025 10:22:05 +0000 (0:00:00.064) 0:00:06.435 **** 2025-09-20 10:22:10.458292 | orchestrator | ok: [testbed-manager] 2025-09-20 10:22:10.458302 | orchestrator | 2025-09-20 10:22:10.458313 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-20 10:22:10.458324 | orchestrator | Saturday 20 September 2025 10:22:06 +0000 (0:00:00.505) 0:00:06.941 **** 2025-09-20 10:22:10.458334 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:22:10.458345 | orchestrator | 2025-09-20 10:22:10.458356 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-20 10:22:10.458367 | orchestrator | Saturday 20 September 2025 10:22:06 +0000 (0:00:00.084) 
0:00:07.025 **** 2025-09-20 10:22:10.458378 | orchestrator | changed: [testbed-manager] 2025-09-20 10:22:10.458389 | orchestrator | 2025-09-20 10:22:10.458399 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-20 10:22:10.458410 | orchestrator | Saturday 20 September 2025 10:22:07 +0000 (0:00:00.528) 0:00:07.554 **** 2025-09-20 10:22:10.458420 | orchestrator | changed: [testbed-manager] 2025-09-20 10:22:10.458431 | orchestrator | 2025-09-20 10:22:10.458442 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-20 10:22:10.458452 | orchestrator | Saturday 20 September 2025 10:22:08 +0000 (0:00:01.087) 0:00:08.641 **** 2025-09-20 10:22:10.458463 | orchestrator | ok: [testbed-manager] 2025-09-20 10:22:10.458474 | orchestrator | 2025-09-20 10:22:10.458484 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-20 10:22:10.458495 | orchestrator | Saturday 20 September 2025 10:22:09 +0000 (0:00:00.940) 0:00:09.582 **** 2025-09-20 10:22:10.458516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-20 10:22:10.458536 | orchestrator | 2025-09-20 10:22:10.458547 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-20 10:22:10.458557 | orchestrator | Saturday 20 September 2025 10:22:09 +0000 (0:00:00.088) 0:00:09.671 **** 2025-09-20 10:22:10.458568 | orchestrator | changed: [testbed-manager] 2025-09-20 10:22:10.458579 | orchestrator | 2025-09-20 10:22:10.458589 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:22:10.458621 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 10:22:10.458633 | orchestrator | 2025-09-20 10:22:10.458643 | orchestrator | 2025-09-20 10:22:10.458654 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:22:10.458665 | orchestrator | Saturday 20 September 2025 10:22:10 +0000 (0:00:01.087) 0:00:10.758 **** 2025-09-20 10:22:10.458675 | orchestrator | =============================================================================== 2025-09-20 10:22:10.458686 | orchestrator | Gathering Facts --------------------------------------------------------- 4.86s 2025-09-20 10:22:10.458697 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.12s 2025-09-20 10:22:10.458708 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2025-09-20 10:22:10.458718 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2025-09-20 10:22:10.458729 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2025-09-20 10:22:10.458740 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2025-09-20 10:22:10.458779 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2025-09-20 10:22:10.458792 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-09-20 10:22:10.458803 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-09-20 
10:22:10.458814 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-20 10:22:10.458825 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-09-20 10:22:10.458835 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-20 10:22:10.458846 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-20 10:22:10.663071 | orchestrator | + osism apply sshconfig 2025-09-20 10:22:22.504170 | orchestrator | 2025-09-20 10:22:22 | INFO  | Task 6faacfec-6f63-4e00-a200-51d7f163d3c0 (sshconfig) was prepared for execution. 2025-09-20 10:22:22.504294 | orchestrator | 2025-09-20 10:22:22 | INFO  | It takes a moment until task 6faacfec-6f63-4e00-a200-51d7f163d3c0 (sshconfig) has been started and output is visible here. 2025-09-20 10:22:34.031945 | orchestrator | 2025-09-20 10:22:34.032097 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-20 10:22:34.032115 | orchestrator | 2025-09-20 10:22:34.032127 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-20 10:22:34.032139 | orchestrator | Saturday 20 September 2025 10:22:26 +0000 (0:00:00.166) 0:00:00.166 **** 2025-09-20 10:22:34.032151 | orchestrator | ok: [testbed-manager] 2025-09-20 10:22:34.032164 | orchestrator | 2025-09-20 10:22:34.032176 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-20 10:22:34.032187 | orchestrator | Saturday 20 September 2025 10:22:26 +0000 (0:00:00.592) 0:00:00.758 **** 2025-09-20 10:22:34.032199 | orchestrator | changed: [testbed-manager] 2025-09-20 10:22:34.032211 | orchestrator | 2025-09-20 10:22:34.032222 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-20 10:22:34.032234 | orchestrator | Saturday 20 September 2025 10:22:27 +0000 (0:00:00.510) 0:00:01.268 **** 2025-09-20 10:22:34.032244 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-20 10:22:34.032255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-20 10:22:34.032291 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-20 10:22:34.032303 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-20 10:22:34.032314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-20 10:22:34.032342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-20 10:22:34.032354 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-20 10:22:34.032365 | orchestrator | 2025-09-20 10:22:34.032375 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-20 10:22:34.032386 | orchestrator | Saturday 20 September 2025 10:22:33 +0000 (0:00:05.778) 0:00:07.047 **** 2025-09-20 10:22:34.032397 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:22:34.032407 | orchestrator | 2025-09-20 10:22:34.032417 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-20 10:22:34.032427 | orchestrator | Saturday 20 September 2025 10:22:33 +0000 (0:00:00.078) 0:00:07.125 **** 2025-09-20 10:22:34.032438 | orchestrator | changed: [testbed-manager] 2025-09-20 10:22:34.032449 | orchestrator | 2025-09-20 10:22:34.032460 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:22:34.032472 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:22:34.032483 | orchestrator | 2025-09-20 10:22:34.032493 | orchestrator | 2025-09-20 10:22:34.032504 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:22:34.032514 | orchestrator | Saturday 20 September 2025 10:22:33 +0000 (0:00:00.581) 0:00:07.707 **** 2025-09-20 10:22:34.032525 | orchestrator | =============================================================================== 2025-09-20 10:22:34.032535 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.78s 2025-09-20 10:22:34.032545 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-09-20 10:22:34.032556 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-09-20 10:22:34.032567 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s 2025-09-20 10:22:34.032577 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-09-20 10:22:34.328131 | orchestrator | + osism apply known-hosts 2025-09-20 10:22:46.309111 | orchestrator | 2025-09-20 10:22:46 | INFO  | Task 1d76a95d-789b-4624-9146-0abc707f9feb (known-hosts) was prepared for execution. 2025-09-20 10:22:46.309243 | orchestrator | 2025-09-20 10:22:46 | INFO  | It takes a moment until task 1d76a95d-789b-4624-9146-0abc707f9feb (known-hosts) has been started and output is visible here. 2025-09-20 10:23:03.336002 | orchestrator | 2025-09-20 10:23:03.336172 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-20 10:23:03.336190 | orchestrator | 2025-09-20 10:23:03.336202 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-20 10:23:03.336215 | orchestrator | Saturday 20 September 2025 10:22:50 +0000 (0:00:00.171) 0:00:00.171 **** 2025-09-20 10:23:03.336227 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-20 10:23:03.336239 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-20 10:23:03.336250 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-20 10:23:03.336261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-20 10:23:03.336272 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-20 10:23:03.336282 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-20 10:23:03.336293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-20 10:23:03.336304 | orchestrator | 2025-09-20 10:23:03.336315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-20 10:23:03.336327 | orchestrator | Saturday 20 September 2025 10:22:56 +0000 (0:00:06.153) 0:00:06.324 **** 2025-09-20 10:23:03.336391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-20 10:23:03.336407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-0) 2025-09-20 10:23:03.336418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-20 10:23:03.336429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-20 10:23:03.336440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-20 10:23:03.336462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-20 10:23:03.336476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-20 10:23:03.336489 | orchestrator | 2025-09-20 10:23:03.336501 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:03.336514 | orchestrator | Saturday 20 September 2025 10:22:56 +0000 (0:00:00.180) 0:00:06.504 **** 2025-09-20 10:23:03.336527 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhMDCDwCLYEYib0DylVHFFs2eX7tkuRAehRykV0OWeR) 2025-09-20 10:23:03.336546 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDh4+tggJZDfsmaQWNeFiAHJpSf+UgQrI3n0uYSyCrQ1jf/j0BGl8Z/v6dtBQmsue/Iz32uD/yvWeqNgfNyIJRxWlw6NJmyeVLwssE3IdistbIqPLKSVnh+yqDhaaY+dqklFBwhasm9ALNUUO1M3HuDmknmY/Cv+cZ0fmCuCZGb/j9s4pjanmvMxN+8MWJNbspzxwlZn8XUcSNnIM5Xhof+YC8ubMmBiMKJPGzx+YWntAOQvH+nwj14dHWjcDCY/YpExphCa7zUfyZcpDKssIAyNaY+6U92utFIuVYO0zIKguwQRCJjTZV41dGOM5l8XjyAHUVrrlFmH3nyr3xhCzDBxIv2AupaNl8z9q2aFbf91leNLvC0S0dTh9eXE34qBrjPlKpaSwWpYSpK2BCDHRxccniOlc3Tpsj38JAfUVevQZ3sg+gdWPszhNaDITJhNCe4x9Gss4t9M/ztKcuu8H+XKkacX8Qj79e3dY8Bs0EcQkxoIDVag+iGzb5a765oXKs=) 2025-09-20 10:23:03.336562 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD6f5xQYJcubjxXyTEzsn1n01OfmGNNB+1GULMhgOoiZ26UGhU9f6t6HJ/7+vJX1tuonYSmTgyrKbTSkL/ArgLo=) 2025-09-20 10:23:03.336590 | orchestrator | 2025-09-20 10:23:03.336603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:03.336616 | orchestrator | Saturday 20 September 2025 10:22:57 +0000 (0:00:01.243) 0:00:07.748 **** 2025-09-20 10:23:03.336650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvDD6TUIUpiBUFjKjulb0YnbEGYlH6U1+avR6OK0PvbCvTWZIG24ZnKF+rtJaJUYmSoudjqhwuHIuyX8Mgh8WSTt8p+c3igAOIbILEz2gPvycVXczhqGkuaw9JxgxrMK3nmvVOTp8g4UNG+m9rrCp4J4KRMM2x9kSIgXRasKpGVMMft0m8XgHvK5uB3ODab7pQiDA5SH0yTfEy4hwUka0+eEgUkRamwvbYldDNE91YplwuZ3qBJLfO+73hJPi2UAJvNy0rEAa//SrYUzROjVd4wSk/0PMrxicbC3FK0XAuezcBeSiZpMd/9YqZixqPQYIB6NSxQdHQ7W0B9UadQ088E5z3gPLw/RWzGV4nzkW+wNBzRLwhrL964owTrukrstoOY3TefPx+JXuODv+r6ImF1U4NJrcaG7/wpfs/KbAFqEhRJc3JtZSOVHFbOp1K/jM5SbgdjekO1OaNoOYY32388anNtHyeT2tx1rlP16PFYv8RIkqyf8dtSzs60/2vqzE=) 2025-09-20 10:23:03.336664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKAzVO+Y/0Km81ETYOp2pL9iEGgpMceuBMngHUd1B806fhin8NG9Dun3L94uwmC5/iAQI1JnvUJAqVIZMAvOLvQ=) 2025-09-20 10:23:03.336677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJLQzVcINn8KzbnWjVA5Vf1dfp3c6j//V9nXYP+3ObQ) 2025-09-20 10:23:03.336699 | orchestrator | 2025-09-20 10:23:03.336712 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:03.336724 | orchestrator | Saturday 20 September 2025 10:22:59 +0000 (0:00:01.114) 0:00:08.862 **** 2025-09-20 10:23:03.336737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLwKhxKetu0VGnqi2CBCgmSs2s98eJoMVvDG0KTLPctZNPOUe7e76x6hIH0YDej2tP3bEhnVGtKG2atSsAzPw10=) 2025-09-20 10:23:03.336750 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmL3BZ8cHS4HFgYzwBGTbHeQCMrggDWsubcUP4X7z5Q1uoI8O7U6VR7z5y9YYSsTv7BQl+WftG0l2ulMyhVPciYLEARU+rOmUCrmz8HBtmx5l/NBlC1wxjNCxdBTep7JH9pLiN1MY2G5N0SIvfythfV0dRAtpRfflxgCsYvQ/0QP7nDziWEFgM3dELwkfubJS3xZAeO12rk56BrMg+DYJRVkvOMXDP9vLJXmrBuRP1bqL1UPVELQHWsU+9r6vMrXLvtS+6rFHulmAwkW5Po8uuI9ae+SQPguNHDzgvgRMQ4g0NEJg5iMYuNIWsHUBv+gerqJyTeg1Lx7yhMJAv3xfo8bx8sVVfR23WYpAmv1en/4VcPWf4TFtGcfckBiEENU02Djh98w1WZm36/1mlCzy75qKhSbA8Yc2UKcjb0w6/Qhg+r5y6v1/MG9l32fab2mbv8WQHcT7z2oRXJTR3Q8KczbeY+OgFguQUkEEzkRni9o5K+1KpXPnaqubJ76KmeZ0=) 2025-09-20 10:23:03.336763 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoAEqoECp3qrmoWNOaTcUp5KljI/Nq55LPWVEmBRSN2) 2025-09-20 10:23:03.336776 | orchestrator | 2025-09-20 10:23:03.336789 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:03.336802 | orchestrator | Saturday 20 September 2025 10:23:00 +0000 (0:00:01.080) 0:00:09.943 **** 2025-09-20 10:23:03.336893 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpOsEleaizlgmlTS6hAs3G1G2apFMEQuTqIyVJs4T0csjgohNgXV923tEz0Z6kLupxnhkW1nTMbLYz0OIEWCF3dncR/k0uJX39gysM/AMDq2jb54i/mdcIYc3uBTZCDdDvXfLc9tBuipxslOH4fRVSgQNLEKwfA4npX2r//59HnkrNq44r/hzma2tmBi1869fLRAUDUIPmxZsWnljZJ227M0g6J9dpOQhEl5V6YaGGhABDOdWvxD/cT4F43jAkUYztEg2ETG7RRSUZL1RRgjF67AtEHBtF5t+qtmqCuYkh1lBK4daQgPGQK7u5bfx8kw3h1NzquzzUbdewaglA5FrZ8HO4Ch3UZUHPyyZZvsZQsvJ4V2/T2U606wF4/HYALb7KOZkhPZrQyWtRaK4orVM42ea0j9CLpv2bBf8OVkaxuqJ0rWMFi+vKndwfYK6qwkSOnZGQKQrR/2Ung0sCA3HP5qPFhRdCV2x683+1fSXx9YIatuHF4WXc59IAfWQizes=) 2025-09-20 10:23:03.336907 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFjmdwkBM7JDe5GjRFrj3oHCRYhDt6pT10DC6lu50fx91s5zX1m7YrCJNDm7TkcroVMPPh+6OkyJCv0XORWuTVU=) 2025-09-20 10:23:03.336918 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPS5T5roI2pkoEzuob+bAa4pVwSGtE67Gdx2rx3HfWfZ) 2025-09-20 10:23:03.336929 | orchestrator | 2025-09-20 10:23:03.336940 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:03.336951 | orchestrator | Saturday 20 September 2025 10:23:01 +0000 (0:00:01.074) 0:00:11.018 **** 2025-09-20 10:23:03.336962 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIENz8x+LylCfAshfUr+TJ9zaZfwJ2Pej510fV9Wja2b3) 2025-09-20 
10:23:03.336973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs0zk8WM/jvCRbvHR7q9jtGHNSNv1HMmrCoqwp6GwWQvKH0GnXij6jrhQENIPMLg5y8qVT4KW/LQzVwtdt6cql1LY61gVXgOQhzD1uKR5Q4Ay+65A7dMFjmZBeXdJwqduwyf2TpaRHPJ0VhVZKIE/d2NTOgauXktwUECfpGhde0FQQT2zJSdcf3VELZzcu+cp0zeLRJdPfr6oFvGRjxY2DXXQni47KB9jGdciLJAfXLbuqe3SIbEElSrvmMyd30Vc4T40Q1sEaRorzZwfENTZke5i8VPUMLrw19g7+oZj22FIWPdmogiqdqp1TBEPV5euqM6Pq2VmwUX732VeXDH9Xnkj069HqGp6XJmLLmODcfamMcZbfNTvDOaGkBMruz2dGLlNEblBZR1KxEdbs9LIaGy+6j8Pfl/css5+nM3vn9j4znn5VuRnZ1GQms3GPTHbahTLjPcR7oyg165y855EMYaGZFaIP+vRAh09F6p4MSd7OldsguZxynKxla+wXN7E=) 2025-09-20 10:23:03.336985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDRRj32nRPA1XYZR7bd1FSD2VYvqyTBQJxD1+7NanQx2MgxhHyXrynU3Xs1nKC/ACpVFDUuCVimSmCHpW7+w0h8=) 2025-09-20 10:23:03.337003 | orchestrator | 2025-09-20 10:23:03.337015 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:03.337026 | orchestrator | Saturday 20 September 2025 10:23:02 +0000 (0:00:01.078) 0:00:12.096 **** 2025-09-20 10:23:03.337064 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGP7TWBXs1rA8H48JzsGwUzAsm1BwCtF/x5tKE+OSQiEkzm5H+Mzz4h96F8lCOl2MoaviWKbVMQYfY5BBGuoBoW97WShjS+/Jkl4QJvYR28OQvAZ6edIvB2+h5JHvf36u0vKj2Od+2O46K5DpCuXeEusmXSoThPVOOacRjlijwXvBoaRxalE49kOzj+Wsu42MjfpkjJogh8Zk7/Hso2id/p3GpUpJcAYoWafD8c/g0Ob0JzWIemnya6Q8Jw+XhNwYA5G5b2ojsLDuIgRIaqKI9DBEFejC3NqejI8kuU+JQlDHrO3yDv9x6QQr509dYV3BwfxyHS2zK5954BmXw9OPv5uIBy2tequVm9u733lE6eHHtPc7hWqfKs6PUhHncIjya7tNgUBPy5vhmWSaL1hqiJbIIYU5GColSsrFs+JBXINlSlM/2/dfYIxmoa955qeGXh7PrsEBfXRgK6liiyX1jhPRVlj5btvLzSFp3VajAOCnR4lQAY/3MOXy1RuJWhxc=) 2025-09-20 10:23:14.375586 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHOI01SeBayS/9lUz83QiEqsp2FlRpf+ZpbAKogBYGuk2VDyyfdzzXvPbMfvDvdGDpMl0Vd75davIj3lJ2wcRQU=) 2025-09-20 10:23:14.375705 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfZ+4I8ni3YDngb+oTurmjVuMMxBibL/vAYITAEN/cv) 2025-09-20 10:23:14.375724 | orchestrator | 2025-09-20 10:23:14.375737 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:14.375751 | orchestrator | Saturday 20 September 2025 10:23:03 +0000 (0:00:01.042) 0:00:13.139 **** 2025-09-20 10:23:14.375764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXG4dlSj3dn8f5pquTHfD4rPElyYvea0VwrCgmtu0IIqYaYggGUWbSMeiBqI4ALvoAAAE6eOg4v0l5+V8wCt1ML97TZ8RJx9e3otY0G+0mL9D/Q8lROpnVXV7S2YgTOPSv44W1J3ZP5kMVRx+fbsiy0fRbXCy4tYooL6HYMSawvUs1567tSdfNEapyBQ5EP58+6ljDZAEa1wGMLSIFZIPjpJcSXytt157OZuoohIOts4yCLSfOK530GkcFyrwZ9ladsW+IoxckNi1qoKKauNTs7LP3CNC6J+nI6aarqNxeUmaWh59ogT5oit2LOe1ba9dx7c8t4SVQ5VRhzhm5S5iAZJC2s4KBAEESpRtmxce5c6lw0iFfsPTdprJStQNw+OmRrcHEkXagmBBINFBtlOYuyXDudV6BcTJJ4bFDnzRNTV1A32/LvQ1j7iioRXv3U8DQZreYUSewgTlNmGsL3Z/t8yqcfdUf0xOMhPtgtCf3hee7XPBU+KqAb63IM2u7cvc=) 2025-09-20 10:23:14.375779 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBdD1oPtMwMe/VaJaHNS9MDXDzk9lFKt5HHESXJwp4R1LMQb59Li2ARs71uQc+8XtMhrJqnLfTWMs957C+x8y+g=) 2025-09-20 10:23:14.375790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDbeqV6F8BM/hui70x0RBO/+lufXNbwAB2LmjsnpXEKs) 2025-09-20 10:23:14.375801 | orchestrator | 2025-09-20 10:23:14.375813 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-20 10:23:14.375824 | orchestrator | Saturday 20 September 2025 10:23:04 +0000 (0:00:01.085) 0:00:14.224 **** 2025-09-20 10:23:14.375836 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-20 10:23:14.375847 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-20 10:23:14.375858 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-20 10:23:14.375869 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-20 10:23:14.375879 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-20 10:23:14.375890 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-20 10:23:14.375900 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-20 10:23:14.375911 | orchestrator | 2025-09-20 10:23:14.375922 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-20 10:23:14.375952 | orchestrator | Saturday 20 September 2025 10:23:09 +0000 (0:00:05.361) 0:00:19.585 **** 2025-09-20 10:23:14.375965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-20 10:23:14.375977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-20 10:23:14.376017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-20 10:23:14.376029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-20 10:23:14.376077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-20 10:23:14.376089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-20 10:23:14.376100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-20 10:23:14.376111 | orchestrator | 2025-09-20 10:23:14.376125 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:14.376137 | orchestrator | Saturday 20 September 2025 10:23:09 +0000 (0:00:00.159) 0:00:19.745 **** 2025-09-20 10:23:14.376149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD6f5xQYJcubjxXyTEzsn1n01OfmGNNB+1GULMhgOoiZ26UGhU9f6t6HJ/7+vJX1tuonYSmTgyrKbTSkL/ArgLo=) 2025-09-20 10:23:14.376183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDh4+tggJZDfsmaQWNeFiAHJpSf+UgQrI3n0uYSyCrQ1jf/j0BGl8Z/v6dtBQmsue/Iz32uD/yvWeqNgfNyIJRxWlw6NJmyeVLwssE3IdistbIqPLKSVnh+yqDhaaY+dqklFBwhasm9ALNUUO1M3HuDmknmY/Cv+cZ0fmCuCZGb/j9s4pjanmvMxN+8MWJNbspzxwlZn8XUcSNnIM5Xhof+YC8ubMmBiMKJPGzx+YWntAOQvH+nwj14dHWjcDCY/YpExphCa7zUfyZcpDKssIAyNaY+6U92utFIuVYO0zIKguwQRCJjTZV41dGOM5l8XjyAHUVrrlFmH3nyr3xhCzDBxIv2AupaNl8z9q2aFbf91leNLvC0S0dTh9eXE34qBrjPlKpaSwWpYSpK2BCDHRxccniOlc3Tpsj38JAfUVevQZ3sg+gdWPszhNaDITJhNCe4x9Gss4t9M/ztKcuu8H+XKkacX8Qj79e3dY8Bs0EcQkxoIDVag+iGzb5a765oXKs=) 2025-09-20 10:23:14.376197 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhMDCDwCLYEYib0DylVHFFs2eX7tkuRAehRykV0OWeR) 2025-09-20 10:23:14.376210 | orchestrator | 2025-09-20 10:23:14.376222 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:14.376234 | orchestrator | Saturday 20 September 2025 10:23:11 +0000 (0:00:01.111) 0:00:20.856 **** 2025-09-20 10:23:14.376247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJLQzVcINn8KzbnWjVA5Vf1dfp3c6j//V9nXYP+3ObQ) 2025-09-20 10:23:14.376260 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvDD6TUIUpiBUFjKjulb0YnbEGYlH6U1+avR6OK0PvbCvTWZIG24ZnKF+rtJaJUYmSoudjqhwuHIuyX8Mgh8WSTt8p+c3igAOIbILEz2gPvycVXczhqGkuaw9JxgxrMK3nmvVOTp8g4UNG+m9rrCp4J4KRMM2x9kSIgXRasKpGVMMft0m8XgHvK5uB3ODab7pQiDA5SH0yTfEy4hwUka0+eEgUkRamwvbYldDNE91YplwuZ3qBJLfO+73hJPi2UAJvNy0rEAa//SrYUzROjVd4wSk/0PMrxicbC3FK0XAuezcBeSiZpMd/9YqZixqPQYIB6NSxQdHQ7W0B9UadQ088E5z3gPLw/RWzGV4nzkW+wNBzRLwhrL964owTrukrstoOY3TefPx+JXuODv+r6ImF1U4NJrcaG7/wpfs/KbAFqEhRJc3JtZSOVHFbOp1K/jM5SbgdjekO1OaNoOYY32388anNtHyeT2tx1rlP16PFYv8RIkqyf8dtSzs60/2vqzE=) 2025-09-20 10:23:14.376273 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKAzVO+Y/0Km81ETYOp2pL9iEGgpMceuBMngHUd1B806fhin8NG9Dun3L94uwmC5/iAQI1JnvUJAqVIZMAvOLvQ=) 2025-09-20 10:23:14.376285 | orchestrator | 2025-09-20 10:23:14.376297 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:14.376310 | orchestrator | Saturday 20 September 2025 10:23:12 +0000 (0:00:01.153) 0:00:22.009 **** 2025-09-20 10:23:14.376332 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmL3BZ8cHS4HFgYzwBGTbHeQCMrggDWsubcUP4X7z5Q1uoI8O7U6VR7z5y9YYSsTv7BQl+WftG0l2ulMyhVPciYLEARU+rOmUCrmz8HBtmx5l/NBlC1wxjNCxdBTep7JH9pLiN1MY2G5N0SIvfythfV0dRAtpRfflxgCsYvQ/0QP7nDziWEFgM3dELwkfubJS3xZAeO12rk56BrMg+DYJRVkvOMXDP9vLJXmrBuRP1bqL1UPVELQHWsU+9r6vMrXLvtS+6rFHulmAwkW5Po8uuI9ae+SQPguNHDzgvgRMQ4g0NEJg5iMYuNIWsHUBv+gerqJyTeg1Lx7yhMJAv3xfo8bx8sVVfR23WYpAmv1en/4VcPWf4TFtGcfckBiEENU02Djh98w1WZm36/1mlCzy75qKhSbA8Yc2UKcjb0w6/Qhg+r5y6v1/MG9l32fab2mbv8WQHcT7z2oRXJTR3Q8KczbeY+OgFguQUkEEzkRni9o5K+1KpXPnaqubJ76KmeZ0=) 2025-09-20 10:23:14.376345 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLwKhxKetu0VGnqi2CBCgmSs2s98eJoMVvDG0KTLPctZNPOUe7e76x6hIH0YDej2tP3bEhnVGtKG2atSsAzPw10=) 2025-09-20 10:23:14.376357 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoAEqoECp3qrmoWNOaTcUp5KljI/Nq55LPWVEmBRSN2) 2025-09-20 10:23:14.376369 | orchestrator | 2025-09-20 10:23:14.376381 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:14.376394 | orchestrator | Saturday 20 September 2025 10:23:13 +0000 (0:00:01.095) 0:00:23.104 **** 2025-09-20 10:23:14.376407 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPS5T5roI2pkoEzuob+bAa4pVwSGtE67Gdx2rx3HfWfZ) 2025-09-20 10:23:14.376425 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpOsEleaizlgmlTS6hAs3G1G2apFMEQuTqIyVJs4T0csjgohNgXV923tEz0Z6kLupxnhkW1nTMbLYz0OIEWCF3dncR/k0uJX39gysM/AMDq2jb54i/mdcIYc3uBTZCDdDvXfLc9tBuipxslOH4fRVSgQNLEKwfA4npX2r//59HnkrNq44r/hzma2tmBi1869fLRAUDUIPmxZsWnljZJ227M0g6J9dpOQhEl5V6YaGGhABDOdWvxD/cT4F43jAkUYztEg2ETG7RRSUZL1RRgjF67AtEHBtF5t+qtmqCuYkh1lBK4daQgPGQK7u5bfx8kw3h1NzquzzUbdewaglA5FrZ8HO4Ch3UZUHPyyZZvsZQsvJ4V2/T2U606wF4/HYALb7KOZkhPZrQyWtRaK4orVM42ea0j9CLpv2bBf8OVkaxuqJ0rWMFi+vKndwfYK6qwkSOnZGQKQrR/2Ung0sCA3HP5qPFhRdCV2x683+1fSXx9YIatuHF4WXc59IAfWQizes=) 2025-09-20 10:23:14.376501 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFjmdwkBM7JDe5GjRFrj3oHCRYhDt6pT10DC6lu50fx91s5zX1m7YrCJNDm7TkcroVMPPh+6OkyJCv0XORWuTVU=) 2025-09-20 10:23:18.717932 | orchestrator | 2025-09-20 10:23:18.718135 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:18.718156 | orchestrator | Saturday 20 September 2025 10:23:14 +0000 (0:00:01.076) 0:00:24.181 **** 2025-09-20 10:23:18.718170 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs0zk8WM/jvCRbvHR7q9jtGHNSNv1HMmrCoqwp6GwWQvKH0GnXij6jrhQENIPMLg5y8qVT4KW/LQzVwtdt6cql1LY61gVXgOQhzD1uKR5Q4Ay+65A7dMFjmZBeXdJwqduwyf2TpaRHPJ0VhVZKIE/d2NTOgauXktwUECfpGhde0FQQT2zJSdcf3VELZzcu+cp0zeLRJdPfr6oFvGRjxY2DXXQni47KB9jGdciLJAfXLbuqe3SIbEElSrvmMyd30Vc4T40Q1sEaRorzZwfENTZke5i8VPUMLrw19g7+oZj22FIWPdmogiqdqp1TBEPV5euqM6Pq2VmwUX732VeXDH9Xnkj069HqGp6XJmLLmODcfamMcZbfNTvDOaGkBMruz2dGLlNEblBZR1KxEdbs9LIaGy+6j8Pfl/css5+nM3vn9j4znn5VuRnZ1GQms3GPTHbahTLjPcR7oyg165y855EMYaGZFaIP+vRAh09F6p4MSd7OldsguZxynKxla+wXN7E=) 2025-09-20 10:23:18.718185 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDRRj32nRPA1XYZR7bd1FSD2VYvqyTBQJxD1+7NanQx2MgxhHyXrynU3Xs1nKC/ACpVFDUuCVimSmCHpW7+w0h8=) 2025-09-20 10:23:18.718199 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIENz8x+LylCfAshfUr+TJ9zaZfwJ2Pej510fV9Wja2b3) 2025-09-20 10:23:18.718211 | orchestrator | 2025-09-20 10:23:18.718222 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:18.718233 | orchestrator | Saturday 20 September 2025 10:23:15 +0000 (0:00:01.125) 0:00:25.306 **** 2025-09-20 10:23:18.718244 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfZ+4I8ni3YDngb+oTurmjVuMMxBibL/vAYITAEN/cv) 2025-09-20 10:23:18.718280 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGP7TWBXs1rA8H48JzsGwUzAsm1BwCtF/x5tKE+OSQiEkzm5H+Mzz4h96F8lCOl2MoaviWKbVMQYfY5BBGuoBoW97WShjS+/Jkl4QJvYR28OQvAZ6edIvB2+h5JHvf36u0vKj2Od+2O46K5DpCuXeEusmXSoThPVOOacRjlijwXvBoaRxalE49kOzj+Wsu42MjfpkjJogh8Zk7/Hso2id/p3GpUpJcAYoWafD8c/g0Ob0JzWIemnya6Q8Jw+XhNwYA5G5b2ojsLDuIgRIaqKI9DBEFejC3NqejI8kuU+JQlDHrO3yDv9x6QQr509dYV3BwfxyHS2zK5954BmXw9OPv5uIBy2tequVm9u733lE6eHHtPc7hWqfKs6PUhHncIjya7tNgUBPy5vhmWSaL1hqiJbIIYU5GColSsrFs+JBXINlSlM/2/dfYIxmoa955qeGXh7PrsEBfXRgK6liiyX1jhPRVlj5btvLzSFp3VajAOCnR4lQAY/3MOXy1RuJWhxc=) 2025-09-20 10:23:18.718292 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHOI01SeBayS/9lUz83QiEqsp2FlRpf+ZpbAKogBYGuk2VDyyfdzzXvPbMfvDvdGDpMl0Vd75davIj3lJ2wcRQU=) 2025-09-20 10:23:18.718302 | orchestrator | 2025-09-20 10:23:18.718313 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 10:23:18.718324 | orchestrator | Saturday 20 September 2025 10:23:16 +0000 (0:00:01.067) 0:00:26.374 **** 2025-09-20 10:23:18.718335 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBdD1oPtMwMe/VaJaHNS9MDXDzk9lFKt5HHESXJwp4R1LMQb59Li2ARs71uQc+8XtMhrJqnLfTWMs957C+x8y+g=) 2025-09-20 10:23:18.718347 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXG4dlSj3dn8f5pquTHfD4rPElyYvea0VwrCgmtu0IIqYaYggGUWbSMeiBqI4ALvoAAAE6eOg4v0l5+V8wCt1ML97TZ8RJx9e3otY0G+0mL9D/Q8lROpnVXV7S2YgTOPSv44W1J3ZP5kMVRx+fbsiy0fRbXCy4tYooL6HYMSawvUs1567tSdfNEapyBQ5EP58+6ljDZAEa1wGMLSIFZIPjpJcSXytt157OZuoohIOts4yCLSfOK530GkcFyrwZ9ladsW+IoxckNi1qoKKauNTs7LP3CNC6J+nI6aarqNxeUmaWh59ogT5oit2LOe1ba9dx7c8t4SVQ5VRhzhm5S5iAZJC2s4KBAEESpRtmxce5c6lw0iFfsPTdprJStQNw+OmRrcHEkXagmBBINFBtlOYuyXDudV6BcTJJ4bFDnzRNTV1A32/LvQ1j7iioRXv3U8DQZreYUSewgTlNmGsL3Z/t8yqcfdUf0xOMhPtgtCf3hee7XPBU+KqAb63IM2u7cvc=) 2025-09-20 10:23:18.718358 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDbeqV6F8BM/hui70x0RBO/+lufXNbwAB2LmjsnpXEKs) 2025-09-20 10:23:18.718368 | orchestrator | 2025-09-20 10:23:18.718379 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-20 10:23:18.718390 | orchestrator | Saturday 20 September 2025 10:23:17 +0000 (0:00:01.078) 0:00:27.452 **** 2025-09-20 10:23:18.718401 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-20 10:23:18.718464 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-20 10:23:18.718478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-20 10:23:18.718491 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-20 10:23:18.718503 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-20 10:23:18.718515 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-20 10:23:18.718528 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-20 10:23:18.718542 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:23:18.718555 | orchestrator | 2025-09-20 10:23:18.718586 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-20 10:23:18.718599 | orchestrator | Saturday 20 September 2025 10:23:17 +0000 (0:00:00.167) 0:00:27.620 **** 2025-09-20 10:23:18.718611 | orchestrator | skipping: 
[testbed-manager] 2025-09-20 10:23:18.718623 | orchestrator | 2025-09-20 10:23:18.718635 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-20 10:23:18.718648 | orchestrator | Saturday 20 September 2025 10:23:17 +0000 (0:00:00.069) 0:00:27.689 **** 2025-09-20 10:23:18.718660 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:23:18.718673 | orchestrator | 2025-09-20 10:23:18.718685 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-20 10:23:18.718697 | orchestrator | Saturday 20 September 2025 10:23:17 +0000 (0:00:00.065) 0:00:27.755 **** 2025-09-20 10:23:18.718718 | orchestrator | changed: [testbed-manager] 2025-09-20 10:23:18.718730 | orchestrator | 2025-09-20 10:23:18.718742 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:23:18.718755 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 10:23:18.718768 | orchestrator | 2025-09-20 10:23:18.718780 | orchestrator | 2025-09-20 10:23:18.718793 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:23:18.718806 | orchestrator | Saturday 20 September 2025 10:23:18 +0000 (0:00:00.521) 0:00:28.276 **** 2025-09-20 10:23:18.718818 | orchestrator | =============================================================================== 2025-09-20 10:23:18.718829 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.15s 2025-09-20 10:23:18.718840 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.36s 2025-09-20 10:23:18.718852 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-09-20 10:23:18.718862 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-09-20 10:23:18.718890 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-20 10:23:18.718901 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-20 10:23:18.718912 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-20 10:23:18.718922 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-20 10:23:18.718933 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-20 10:23:18.718944 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-20 10:23:18.718954 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-20 10:23:18.718965 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-20 10:23:18.718975 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-20 10:23:18.718986 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-20 10:23:18.718997 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-20 10:23:18.719007 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-20 10:23:18.719018 | orchestrator | osism.commons.known_hosts : 
Set file permissions ------------------------ 0.52s 2025-09-20 10:23:18.719028 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-09-20 10:23:18.719039 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-20 10:23:18.719081 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-20 10:23:19.057597 | orchestrator | + osism apply squid 2025-09-20 10:23:31.138673 | orchestrator | 2025-09-20 10:23:31 | INFO  | Task 370c3a2d-8e6d-4bfb-b3bc-a08a3fe072b2 (squid) was prepared for execution. 2025-09-20 10:23:31.138826 | orchestrator | 2025-09-20 10:23:31 | INFO  | It takes a moment until task 370c3a2d-8e6d-4bfb-b3bc-a08a3fe072b2 (squid) has been started and output is visible here. 2025-09-20 10:25:24.392347 | orchestrator | 2025-09-20 10:25:24.392470 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-20 10:25:24.392487 | orchestrator | 2025-09-20 10:25:24.392499 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-20 10:25:24.392511 | orchestrator | Saturday 20 September 2025 10:23:34 +0000 (0:00:00.157) 0:00:00.157 **** 2025-09-20 10:25:24.392523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 10:25:24.392535 | orchestrator | 2025-09-20 10:25:24.392547 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-20 10:25:24.392583 | orchestrator | Saturday 20 September 2025 10:23:34 +0000 (0:00:00.083) 0:00:00.241 **** 2025-09-20 10:25:24.392594 | orchestrator | ok: [testbed-manager] 2025-09-20 10:25:24.392607 | orchestrator | 2025-09-20 10:25:24.392617 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-20 10:25:24.392628 | orchestrator | Saturday 20 September 2025 10:23:36 +0000 (0:00:01.196) 0:00:01.437 **** 2025-09-20 10:25:24.392639 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-20 10:25:24.392650 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-20 10:25:24.392660 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-20 10:25:24.392671 | orchestrator | 2025-09-20 10:25:24.392682 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-20 10:25:24.392693 | orchestrator | Saturday 20 September 2025 10:23:37 +0000 (0:00:01.080) 0:00:02.518 **** 2025-09-20 10:25:24.392703 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-20 10:25:24.392714 | orchestrator | 2025-09-20 10:25:24.392725 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-20 10:25:24.392736 | orchestrator | Saturday 20 September 2025 10:23:38 +0000 (0:00:00.962) 0:00:03.481 **** 2025-09-20 10:25:24.392746 | orchestrator | ok: [testbed-manager] 2025-09-20 10:25:24.392757 | orchestrator | 2025-09-20 10:25:24.392767 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-20 10:25:24.392778 | orchestrator | Saturday 20 September 2025 10:23:38 +0000 (0:00:00.322) 0:00:03.804 **** 2025-09-20 10:25:24.392789 | orchestrator | changed: [testbed-manager] 
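At this point the squid role has written its docker-compose.yml; the "Manage squid service" task and the handlers that follow bring the container up, restart it after configuration changes and block until its healthcheck passes. A rough, hand-run equivalent is sketched below, assuming the compose project lives in /opt/squid and the service container is named squid (neither the path nor the name is confirmed by the log).

    # Start the squid proxy from its compose project and wait for the container
    # healthcheck, mirroring the "Manage squid service" and
    # "Wait for an healthy squid service" steps in the play output.
    docker compose --project-directory /opt/squid up -d

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' squid)" == "healthy" ]]; do
        sleep 5
    done
    echo "squid is healthy"

The role itself drives this through the osism.services.squid handlers rather than a shell loop; the sketch only illustrates the health-gated ordering reflected in the timings that follow (roughly 31 s for the initial start plus a fixed 60 s pause before the health probe).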
2025-09-20 10:25:24.392800 | orchestrator | 2025-09-20 10:25:24.392810 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-20 10:25:24.392821 | orchestrator | Saturday 20 September 2025 10:23:39 +0000 (0:00:00.832) 0:00:04.636 **** 2025-09-20 10:25:24.392831 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-20 10:25:24.392844 | orchestrator | ok: [testbed-manager] 2025-09-20 10:25:24.392856 | orchestrator | 2025-09-20 10:25:24.392868 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-20 10:25:24.392880 | orchestrator | Saturday 20 September 2025 10:24:10 +0000 (0:00:31.604) 0:00:36.241 **** 2025-09-20 10:25:24.392891 | orchestrator | changed: [testbed-manager] 2025-09-20 10:25:24.392904 | orchestrator | 2025-09-20 10:25:24.392915 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-20 10:25:24.392927 | orchestrator | Saturday 20 September 2025 10:24:23 +0000 (0:00:12.508) 0:00:48.749 **** 2025-09-20 10:25:24.392940 | orchestrator | Pausing for 60 seconds 2025-09-20 10:25:24.392953 | orchestrator | changed: [testbed-manager] 2025-09-20 10:25:24.392965 | orchestrator | 2025-09-20 10:25:24.392977 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-20 10:25:24.392989 | orchestrator | Saturday 20 September 2025 10:25:23 +0000 (0:01:00.069) 0:01:48.819 **** 2025-09-20 10:25:24.393001 | orchestrator | ok: [testbed-manager] 2025-09-20 10:25:24.393013 | orchestrator | 2025-09-20 10:25:24.393026 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-20 10:25:24.393037 | orchestrator | Saturday 20 September 2025 10:25:23 +0000 (0:00:00.062) 0:01:48.882 **** 2025-09-20 10:25:24.393049 | orchestrator | changed: [testbed-manager] 2025-09-20 10:25:24.393061 | orchestrator | 2025-09-20 10:25:24.393073 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:25:24.393120 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:25:24.393133 | orchestrator | 2025-09-20 10:25:24.393145 | orchestrator | 2025-09-20 10:25:24.393157 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:25:24.393169 | orchestrator | Saturday 20 September 2025 10:25:24 +0000 (0:00:00.662) 0:01:49.544 **** 2025-09-20 10:25:24.393189 | orchestrator | =============================================================================== 2025-09-20 10:25:24.393200 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-20 10:25:24.393211 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.60s 2025-09-20 10:25:24.393221 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.51s 2025-09-20 10:25:24.393232 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.20s 2025-09-20 10:25:24.393243 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.08s 2025-09-20 10:25:24.393253 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.96s 2025-09-20 10:25:24.393265 | orchestrator | osism.services.squid : Copy 
docker-compose.yml file --------------------- 0.83s 2025-09-20 10:25:24.393275 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-09-20 10:25:24.393286 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2025-09-20 10:25:24.393296 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-09-20 10:25:24.393307 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-20 10:25:24.695966 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 10:25:24.696440 | orchestrator | ++ semver latest 9.0.0 2025-09-20 10:25:24.744570 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-20 10:25:24.744645 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 10:25:24.745514 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-20 10:25:36.815146 | orchestrator | 2025-09-20 10:25:36 | INFO  | Task 5c577813-462a-49af-9a89-38c221eebd83 (operator) was prepared for execution. 2025-09-20 10:25:36.815264 | orchestrator | 2025-09-20 10:25:36 | INFO  | It takes a moment until task 5c577813-462a-49af-9a89-38c221eebd83 (operator) has been started and output is visible here. 2025-09-20 10:25:52.262230 | orchestrator | 2025-09-20 10:25:52.262332 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-20 10:25:52.262345 | orchestrator | 2025-09-20 10:25:52.262355 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:25:52.262364 | orchestrator | Saturday 20 September 2025 10:25:40 +0000 (0:00:00.151) 0:00:00.151 **** 2025-09-20 10:25:52.262373 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:25:52.262383 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:25:52.262392 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:25:52.262400 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:25:52.262408 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:25:52.262430 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:25:52.262438 | orchestrator | 2025-09-20 10:25:52.262446 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-20 10:25:52.262454 | orchestrator | Saturday 20 September 2025 10:25:43 +0000 (0:00:03.191) 0:00:03.342 **** 2025-09-20 10:25:52.262462 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:25:52.262470 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:25:52.262478 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:25:52.262486 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:25:52.262494 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:25:52.262502 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:25:52.262509 | orchestrator | 2025-09-20 10:25:52.262517 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-20 10:25:52.262525 | orchestrator | 2025-09-20 10:25:52.262533 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-20 10:25:52.262541 | orchestrator | Saturday 20 September 2025 10:25:44 +0000 (0:00:00.744) 0:00:04.087 **** 2025-09-20 10:25:52.262549 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:25:52.262557 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:25:52.262565 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:25:52.262572 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:25:52.262580 | orchestrator | 
ok: [testbed-node-4] 2025-09-20 10:25:52.262588 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:25:52.262616 | orchestrator | 2025-09-20 10:25:52.262624 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-20 10:25:52.262633 | orchestrator | Saturday 20 September 2025 10:25:44 +0000 (0:00:00.170) 0:00:04.258 **** 2025-09-20 10:25:52.262640 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:25:52.262648 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:25:52.262656 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:25:52.262663 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:25:52.262671 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:25:52.262678 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:25:52.262686 | orchestrator | 2025-09-20 10:25:52.262694 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-20 10:25:52.262702 | orchestrator | Saturday 20 September 2025 10:25:45 +0000 (0:00:00.164) 0:00:04.422 **** 2025-09-20 10:25:52.262710 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:25:52.262718 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:25:52.262726 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:25:52.262734 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:25:52.262741 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:25:52.262750 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:25:52.262757 | orchestrator | 2025-09-20 10:25:52.262765 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-20 10:25:52.262773 | orchestrator | Saturday 20 September 2025 10:25:45 +0000 (0:00:00.602) 0:00:05.025 **** 2025-09-20 10:25:52.262783 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:25:52.262791 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:25:52.262800 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:25:52.262809 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:25:52.262818 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:25:52.262826 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:25:52.262835 | orchestrator | 2025-09-20 10:25:52.262844 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-20 10:25:52.262852 | orchestrator | Saturday 20 September 2025 10:25:46 +0000 (0:00:00.796) 0:00:05.821 **** 2025-09-20 10:25:52.262861 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-20 10:25:52.262870 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-20 10:25:52.262879 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-20 10:25:52.262888 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-20 10:25:52.262897 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-20 10:25:52.262906 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-20 10:25:52.262915 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-20 10:25:52.262923 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-20 10:25:52.262932 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-20 10:25:52.262941 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-20 10:25:52.262949 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-20 10:25:52.262958 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-20 10:25:52.262967 | orchestrator | 2025-09-20 10:25:52.262975 | 
orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-20 10:25:52.262988 | orchestrator | Saturday 20 September 2025 10:25:47 +0000 (0:00:01.123) 0:00:06.945 **** 2025-09-20 10:25:52.262997 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:25:52.263006 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:25:52.263015 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:25:52.263024 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:25:52.263033 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:25:52.263042 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:25:52.263051 | orchestrator | 2025-09-20 10:25:52.263060 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-20 10:25:52.263091 | orchestrator | Saturday 20 September 2025 10:25:48 +0000 (0:00:01.271) 0:00:08.216 **** 2025-09-20 10:25:52.263101 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-20 10:25:52.263117 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-09-20 10:25:52.263126 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-20 10:25:52.263135 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:25:52.263161 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:25:52.263170 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:25:52.263177 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:25:52.263185 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:25:52.263193 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 10:25:52.263201 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-20 10:25:52.263209 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-20 10:25:52.263217 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-20 10:25:52.263225 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-20 10:25:52.263233 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-20 10:25:52.263241 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-20 10:25:52.263248 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:25:52.263256 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:25:52.263264 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:25:52.263272 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:25:52.263280 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:25:52.263288 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-20 10:25:52.263295 | orchestrator | 2025-09-20 10:25:52.263303 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-20 10:25:52.263312 | orchestrator | Saturday 20 September 2025 10:25:50 +0000 (0:00:01.234) 0:00:09.450 **** 2025-09-20 10:25:52.263320 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:25:52.263328 | orchestrator | skipping: 
[testbed-node-1] 2025-09-20 10:25:52.263336 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:25:52.263343 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:25:52.263351 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:25:52.263359 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:25:52.263367 | orchestrator | 2025-09-20 10:25:52.263375 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-20 10:25:52.263383 | orchestrator | Saturday 20 September 2025 10:25:50 +0000 (0:00:00.188) 0:00:09.639 **** 2025-09-20 10:25:52.263391 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:25:52.263399 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:25:52.263406 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:25:52.263414 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:25:52.263422 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:25:52.263430 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:25:52.263437 | orchestrator | 2025-09-20 10:25:52.263445 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-20 10:25:52.263453 | orchestrator | Saturday 20 September 2025 10:25:50 +0000 (0:00:00.566) 0:00:10.206 **** 2025-09-20 10:25:52.263461 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:25:52.263469 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:25:52.263477 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:25:52.263485 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:25:52.263493 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:25:52.263501 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:25:52.263508 | orchestrator | 2025-09-20 10:25:52.263521 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-20 10:25:52.263530 | orchestrator | Saturday 20 September 2025 10:25:51 +0000 (0:00:00.219) 0:00:10.425 **** 2025-09-20 10:25:52.263537 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 10:25:52.263550 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-20 10:25:52.263558 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:25:52.263566 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:25:52.263575 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:25:52.263583 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:25:52.263591 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:25:52.263599 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:25:52.263607 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:25:52.263616 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:25:52.263624 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-20 10:25:52.263632 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:25:52.263640 | orchestrator | 2025-09-20 10:25:52.263649 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-20 10:25:52.263657 | orchestrator | Saturday 20 September 2025 10:25:51 +0000 (0:00:00.704) 0:00:11.130 **** 2025-09-20 10:25:52.263665 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:25:52.263673 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:25:52.263682 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:25:52.263690 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:25:52.263698 | orchestrator | 
skipping: [testbed-node-4] 2025-09-20 10:25:52.263706 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:25:52.263715 | orchestrator | 2025-09-20 10:25:52.263723 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-20 10:25:52.263731 | orchestrator | Saturday 20 September 2025 10:25:51 +0000 (0:00:00.176) 0:00:11.306 **** 2025-09-20 10:25:52.263740 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:25:52.263748 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:25:52.263756 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:25:52.263764 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:25:52.263777 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:25:52.263786 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:25:52.263794 | orchestrator | 2025-09-20 10:25:52.263803 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-20 10:25:52.263811 | orchestrator | Saturday 20 September 2025 10:25:52 +0000 (0:00:00.136) 0:00:11.443 **** 2025-09-20 10:25:52.263819 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:25:52.263828 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:25:52.263836 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:25:52.263845 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:25:52.263859 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:25:53.380905 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:25:53.381023 | orchestrator | 2025-09-20 10:25:53.381037 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-20 10:25:53.381050 | orchestrator | Saturday 20 September 2025 10:25:52 +0000 (0:00:00.165) 0:00:11.609 **** 2025-09-20 10:25:53.381131 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:25:53.381144 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:25:53.381154 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:25:53.381164 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:25:53.381174 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:25:53.381183 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:25:53.381200 | orchestrator | 2025-09-20 10:25:53.381217 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-20 10:25:53.381233 | orchestrator | Saturday 20 September 2025 10:25:52 +0000 (0:00:00.649) 0:00:12.259 **** 2025-09-20 10:25:53.381247 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:25:53.381263 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:25:53.381277 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:25:53.381321 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:25:53.381336 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:25:53.381349 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:25:53.381365 | orchestrator | 2025-09-20 10:25:53.381381 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:25:53.381397 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:25:53.381415 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:25:53.381430 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:25:53.381447 | orchestrator | 
testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:25:53.381466 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:25:53.381483 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:25:53.381500 | orchestrator | 2025-09-20 10:25:53.381515 | orchestrator | 2025-09-20 10:25:53.381526 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:25:53.381537 | orchestrator | Saturday 20 September 2025 10:25:53 +0000 (0:00:00.239) 0:00:12.499 **** 2025-09-20 10:25:53.381548 | orchestrator | =============================================================================== 2025-09-20 10:25:53.381558 | orchestrator | Gathering Facts --------------------------------------------------------- 3.19s 2025-09-20 10:25:53.381570 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2025-09-20 10:25:53.381581 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s 2025-09-20 10:25:53.381593 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s 2025-09-20 10:25:53.381603 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-09-20 10:25:53.381614 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s 2025-09-20 10:25:53.381625 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s 2025-09-20 10:25:53.381637 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s 2025-09-20 10:25:53.381648 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-09-20 10:25:53.381657 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-09-20 10:25:53.381673 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2025-09-20 10:25:53.381689 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2025-09-20 10:25:53.381704 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s 2025-09-20 10:25:53.381721 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2025-09-20 10:25:53.381757 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2025-09-20 10:25:53.381775 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-09-20 10:25:53.381786 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-09-20 10:25:53.381796 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-09-20 10:25:53.680004 | orchestrator | + osism apply --environment custom facts 2025-09-20 10:25:55.536431 | orchestrator | 2025-09-20 10:25:55 | INFO  | Trying to run play facts in environment custom 2025-09-20 10:26:05.632935 | orchestrator | 2025-09-20 10:26:05 | INFO  | Task ecd11d07-310a-4bb8-bd4f-a91818080ba8 (facts) was prepared for execution. 
2025-09-20 10:26:05.633138 | orchestrator | 2025-09-20 10:26:05 | INFO  | It takes a moment until task ecd11d07-310a-4bb8-bd4f-a91818080ba8 (facts) has been started and output is visible here. 2025-09-20 10:26:47.664079 | orchestrator | 2025-09-20 10:26:47.664171 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-20 10:26:47.664180 | orchestrator | 2025-09-20 10:26:47.664185 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-20 10:26:47.664191 | orchestrator | Saturday 20 September 2025 10:26:09 +0000 (0:00:00.069) 0:00:00.069 **** 2025-09-20 10:26:47.664195 | orchestrator | ok: [testbed-manager] 2025-09-20 10:26:47.664202 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:26:47.664208 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:26:47.664213 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:26:47.664217 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:26:47.664222 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:26:47.664226 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:26:47.664231 | orchestrator | 2025-09-20 10:26:47.664236 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-20 10:26:47.664240 | orchestrator | Saturday 20 September 2025 10:26:10 +0000 (0:00:01.288) 0:00:01.358 **** 2025-09-20 10:26:47.664245 | orchestrator | ok: [testbed-manager] 2025-09-20 10:26:47.664249 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:26:47.664254 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:26:47.664258 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:26:47.664263 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:26:47.664267 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:26:47.664272 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:26:47.664276 | orchestrator | 2025-09-20 10:26:47.664281 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-20 10:26:47.664285 | orchestrator | 2025-09-20 10:26:47.664290 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-20 10:26:47.664295 | orchestrator | Saturday 20 September 2025 10:26:11 +0000 (0:00:01.152) 0:00:02.510 **** 2025-09-20 10:26:47.664299 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664304 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664309 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664313 | orchestrator | 2025-09-20 10:26:47.664318 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-20 10:26:47.664323 | orchestrator | Saturday 20 September 2025 10:26:12 +0000 (0:00:00.114) 0:00:02.625 **** 2025-09-20 10:26:47.664328 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664332 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664337 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664341 | orchestrator | 2025-09-20 10:26:47.664346 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-20 10:26:47.664351 | orchestrator | Saturday 20 September 2025 10:26:12 +0000 (0:00:00.194) 0:00:02.819 **** 2025-09-20 10:26:47.664355 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664360 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664365 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664369 | 
orchestrator | 2025-09-20 10:26:47.664374 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-20 10:26:47.664378 | orchestrator | Saturday 20 September 2025 10:26:12 +0000 (0:00:00.170) 0:00:02.989 **** 2025-09-20 10:26:47.664384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:26:47.664390 | orchestrator | 2025-09-20 10:26:47.664395 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-20 10:26:47.664399 | orchestrator | Saturday 20 September 2025 10:26:12 +0000 (0:00:00.115) 0:00:03.105 **** 2025-09-20 10:26:47.664420 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664425 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664429 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664434 | orchestrator | 2025-09-20 10:26:47.664438 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-20 10:26:47.664443 | orchestrator | Saturday 20 September 2025 10:26:12 +0000 (0:00:00.414) 0:00:03.520 **** 2025-09-20 10:26:47.664447 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:26:47.664452 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:26:47.664456 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:26:47.664461 | orchestrator | 2025-09-20 10:26:47.664465 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-20 10:26:47.664470 | orchestrator | Saturday 20 September 2025 10:26:13 +0000 (0:00:00.122) 0:00:03.642 **** 2025-09-20 10:26:47.664474 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:26:47.664479 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:26:47.664483 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:26:47.664488 | orchestrator | 2025-09-20 10:26:47.664492 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-20 10:26:47.664497 | orchestrator | Saturday 20 September 2025 10:26:14 +0000 (0:00:01.052) 0:00:04.695 **** 2025-09-20 10:26:47.664501 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664506 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664510 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664515 | orchestrator | 2025-09-20 10:26:47.664519 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-20 10:26:47.664525 | orchestrator | Saturday 20 September 2025 10:26:14 +0000 (0:00:00.511) 0:00:05.206 **** 2025-09-20 10:26:47.664529 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:26:47.664534 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:26:47.664538 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:26:47.664543 | orchestrator | 2025-09-20 10:26:47.664547 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-20 10:26:47.664552 | orchestrator | Saturday 20 September 2025 10:26:15 +0000 (0:00:01.051) 0:00:06.258 **** 2025-09-20 10:26:47.664556 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:26:47.664561 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:26:47.664565 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:26:47.664570 | orchestrator | 2025-09-20 10:26:47.664574 | orchestrator | TASK [Install required packages (RedHat)] 
************************************** 2025-09-20 10:26:47.664590 | orchestrator | Saturday 20 September 2025 10:26:32 +0000 (0:00:16.467) 0:00:22.726 **** 2025-09-20 10:26:47.664595 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:26:47.664600 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:26:47.664604 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:26:47.664609 | orchestrator | 2025-09-20 10:26:47.664613 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-20 10:26:47.664628 | orchestrator | Saturday 20 September 2025 10:26:32 +0000 (0:00:00.130) 0:00:22.857 **** 2025-09-20 10:26:47.664633 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:26:47.664638 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:26:47.664642 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:26:47.664647 | orchestrator | 2025-09-20 10:26:47.664651 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-20 10:26:47.664656 | orchestrator | Saturday 20 September 2025 10:26:39 +0000 (0:00:06.836) 0:00:29.693 **** 2025-09-20 10:26:47.664661 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664665 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664670 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664674 | orchestrator | 2025-09-20 10:26:47.664679 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-20 10:26:47.664683 | orchestrator | Saturday 20 September 2025 10:26:39 +0000 (0:00:00.408) 0:00:30.102 **** 2025-09-20 10:26:47.664688 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-20 10:26:47.664696 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-20 10:26:47.664701 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-20 10:26:47.664705 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-20 10:26:47.664710 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-20 10:26:47.664715 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-20 10:26:47.664719 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-20 10:26:47.664723 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-20 10:26:47.664728 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-20 10:26:47.664733 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-20 10:26:47.664737 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-20 10:26:47.664742 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-20 10:26:47.664746 | orchestrator | 2025-09-20 10:26:47.664751 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-20 10:26:47.664755 | orchestrator | Saturday 20 September 2025 10:26:42 +0000 (0:00:03.328) 0:00:33.430 **** 2025-09-20 10:26:47.664760 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664764 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664769 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664773 | orchestrator | 2025-09-20 10:26:47.664778 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 
10:26:47.664782 | orchestrator | 2025-09-20 10:26:47.664787 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 10:26:47.664791 | orchestrator | Saturday 20 September 2025 10:26:44 +0000 (0:00:01.129) 0:00:34.560 **** 2025-09-20 10:26:47.664796 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:26:47.664801 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:26:47.664805 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:26:47.664810 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:26:47.664814 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:26:47.664819 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:26:47.664823 | orchestrator | ok: [testbed-manager] 2025-09-20 10:26:47.664828 | orchestrator | 2025-09-20 10:26:47.664832 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:26:47.664837 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:26:47.664842 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:26:47.664848 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:26:47.664853 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:26:47.664857 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:26:47.664862 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:26:47.664870 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:26:47.664874 | orchestrator | 2025-09-20 10:26:47.664879 | orchestrator | 2025-09-20 10:26:47.664884 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:26:47.664888 | orchestrator | Saturday 20 September 2025 10:26:47 +0000 (0:00:03.647) 0:00:38.208 **** 2025-09-20 10:26:47.664893 | orchestrator | =============================================================================== 2025-09-20 10:26:47.664901 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.47s 2025-09-20 10:26:47.664905 | orchestrator | Install required packages (Debian) -------------------------------------- 6.84s 2025-09-20 10:26:47.664910 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.65s 2025-09-20 10:26:47.664914 | orchestrator | Copy fact files --------------------------------------------------------- 3.33s 2025-09-20 10:26:47.664919 | orchestrator | Create custom facts directory ------------------------------------------- 1.29s 2025-09-20 10:26:47.664923 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s 2025-09-20 10:26:47.664931 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.13s 2025-09-20 10:26:47.857949 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2025-09-20 10:26:47.858077 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s 2025-09-20 10:26:47.858085 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.51s 2025-09-20 10:26:47.858090 | 
orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s 2025-09-20 10:26:47.858095 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s 2025-09-20 10:26:47.858100 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2025-09-20 10:26:47.858105 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s 2025-09-20 10:26:47.858109 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s 2025-09-20 10:26:47.858114 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-09-20 10:26:47.858118 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2025-09-20 10:26:47.858124 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-09-20 10:26:48.108169 | orchestrator | + osism apply bootstrap 2025-09-20 10:26:59.920709 | orchestrator | 2025-09-20 10:26:59 | INFO  | Task 4c78111c-9041-4ef2-bf0e-4e4114224a8f (bootstrap) was prepared for execution. 2025-09-20 10:26:59.920825 | orchestrator | 2025-09-20 10:26:59 | INFO  | It takes a moment until task 4c78111c-9041-4ef2-bf0e-4e4114224a8f (bootstrap) has been started and output is visible here. 2025-09-20 10:27:16.034770 | orchestrator | 2025-09-20 10:27:16.034889 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-20 10:27:16.034907 | orchestrator | 2025-09-20 10:27:16.034919 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-20 10:27:16.034931 | orchestrator | Saturday 20 September 2025 10:27:03 +0000 (0:00:00.167) 0:00:00.167 **** 2025-09-20 10:27:16.034943 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:16.034956 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:16.034967 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:16.034978 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:16.034989 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:16.035000 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:16.035010 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:16.035021 | orchestrator | 2025-09-20 10:27:16.035032 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 10:27:16.035095 | orchestrator | 2025-09-20 10:27:16.035108 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 10:27:16.035119 | orchestrator | Saturday 20 September 2025 10:27:04 +0000 (0:00:00.242) 0:00:00.409 **** 2025-09-20 10:27:16.035130 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:16.035141 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:16.035152 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:16.035163 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:16.035174 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:16.035184 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:16.035195 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:16.035231 | orchestrator | 2025-09-20 10:27:16.035243 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-20 10:27:16.035254 | orchestrator | 2025-09-20 10:27:16.035265 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 
2025-09-20 10:27:16.035276 | orchestrator | Saturday 20 September 2025 10:27:07 +0000 (0:00:03.826) 0:00:04.236 **** 2025-09-20 10:27:16.035287 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-20 10:27:16.035300 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-20 10:27:16.035313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-20 10:27:16.035325 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-20 10:27:16.035338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 10:27:16.035350 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-20 10:27:16.035362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 10:27:16.035375 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-20 10:27:16.035388 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-20 10:27:16.035400 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-20 10:27:16.035412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 10:27:16.035425 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-20 10:27:16.035438 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-20 10:27:16.035451 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-20 10:27:16.035463 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-20 10:27:16.035476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-20 10:27:16.035488 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-20 10:27:16.035501 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-20 10:27:16.035514 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:16.035526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-20 10:27:16.035539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-20 10:27:16.035551 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:27:16.035563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-20 10:27:16.035576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-20 10:27:16.035588 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-20 10:27:16.035601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-20 10:27:16.035612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-20 10:27:16.035625 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-20 10:27:16.035637 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-20 10:27:16.035649 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-20 10:27:16.035660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 10:27:16.035670 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-20 10:27:16.035681 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-20 10:27:16.035691 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:27:16.035702 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-20 10:27:16.035713 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:27:16.035723 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 10:27:16.035734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-20 10:27:16.035744 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-20 10:27:16.035755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 10:27:16.035783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-20 10:27:16.035803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-20 10:27:16.035814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:27:16.035825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-20 10:27:16.035836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-20 10:27:16.035847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:27:16.035876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-20 10:27:16.035887 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-20 10:27:16.035898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:27:16.035909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-20 10:27:16.035919 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:27:16.035930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-20 10:27:16.035941 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:27:16.035952 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-20 10:27:16.035963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-20 10:27:16.035973 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:27:16.035984 | orchestrator | 2025-09-20 10:27:16.035995 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-20 10:27:16.036005 | orchestrator | 2025-09-20 10:27:16.036016 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-20 10:27:16.036027 | orchestrator | Saturday 20 September 2025 10:27:08 +0000 (0:00:00.457) 0:00:04.694 **** 2025-09-20 10:27:16.036038 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:16.036068 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:16.036079 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:16.036090 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:16.036100 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:16.036111 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:16.036122 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:16.036132 | orchestrator | 2025-09-20 10:27:16.036143 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-20 10:27:16.036154 | orchestrator | Saturday 20 September 2025 10:27:09 +0000 (0:00:01.187) 0:00:05.881 **** 2025-09-20 10:27:16.036165 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:16.036176 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:16.036186 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:16.036197 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:16.036208 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:16.036218 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:16.036229 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:16.036239 | orchestrator | 2025-09-20 
10:27:16.036250 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-20 10:27:16.036261 | orchestrator | Saturday 20 September 2025 10:27:10 +0000 (0:00:01.091) 0:00:06.973 **** 2025-09-20 10:27:16.036273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:27:16.036286 | orchestrator | 2025-09-20 10:27:16.036297 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-20 10:27:16.036308 | orchestrator | Saturday 20 September 2025 10:27:10 +0000 (0:00:00.255) 0:00:07.229 **** 2025-09-20 10:27:16.036319 | orchestrator | changed: [testbed-manager] 2025-09-20 10:27:16.036330 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:16.036346 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:16.036357 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:27:16.036368 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:27:16.036378 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:16.036389 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:27:16.036400 | orchestrator | 2025-09-20 10:27:16.036417 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-20 10:27:16.036429 | orchestrator | Saturday 20 September 2025 10:27:13 +0000 (0:00:02.853) 0:00:10.082 **** 2025-09-20 10:27:16.036439 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:16.036452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:27:16.036464 | orchestrator | 2025-09-20 10:27:16.036475 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-20 10:27:16.036486 | orchestrator | Saturday 20 September 2025 10:27:13 +0000 (0:00:00.216) 0:00:10.299 **** 2025-09-20 10:27:16.036497 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:16.036508 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:16.036518 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:16.036529 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:27:16.036539 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:27:16.036550 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:27:16.036561 | orchestrator | 2025-09-20 10:27:16.036571 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-20 10:27:16.036582 | orchestrator | Saturday 20 September 2025 10:27:14 +0000 (0:00:00.917) 0:00:11.217 **** 2025-09-20 10:27:16.036593 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:16.036604 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:27:16.036614 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:16.036625 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:27:16.036642 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:27:16.036659 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:16.036678 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:16.036696 | orchestrator | 2025-09-20 10:27:16.036714 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-20 
10:27:16.036731 | orchestrator | Saturday 20 September 2025 10:27:15 +0000 (0:00:00.539) 0:00:11.756 **** 2025-09-20 10:27:16.036748 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:27:16.036765 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:27:16.036784 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:27:16.036802 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:27:16.036822 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:27:16.036839 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:27:16.036855 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:16.036866 | orchestrator | 2025-09-20 10:27:16.036877 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-20 10:27:16.036889 | orchestrator | Saturday 20 September 2025 10:27:15 +0000 (0:00:00.462) 0:00:12.219 **** 2025-09-20 10:27:16.036900 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:16.036911 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:27:16.036931 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:27:27.796604 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:27:27.796721 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:27:27.796736 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:27:27.796748 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:27:27.796759 | orchestrator | 2025-09-20 10:27:27.796771 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-20 10:27:27.796784 | orchestrator | Saturday 20 September 2025 10:27:16 +0000 (0:00:00.264) 0:00:12.484 **** 2025-09-20 10:27:27.796797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:27:27.796827 | orchestrator | 2025-09-20 10:27:27.796838 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-20 10:27:27.796850 | orchestrator | Saturday 20 September 2025 10:27:16 +0000 (0:00:00.351) 0:00:12.835 **** 2025-09-20 10:27:27.796888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:27:27.796900 | orchestrator | 2025-09-20 10:27:27.796911 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-20 10:27:27.796922 | orchestrator | Saturday 20 September 2025 10:27:16 +0000 (0:00:00.301) 0:00:13.137 **** 2025-09-20 10:27:27.796932 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.796945 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.796955 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.796966 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.796976 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.796987 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.796997 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.797008 | orchestrator | 2025-09-20 10:27:27.797018 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-20 10:27:27.797029 | orchestrator | Saturday 20 September 2025 10:27:17 +0000 (0:00:01.214) 0:00:14.351 
**** 2025-09-20 10:27:27.797040 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:27.797079 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:27:27.797090 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:27:27.797101 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:27:27.797112 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:27:27.797123 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:27:27.797135 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:27:27.797147 | orchestrator | 2025-09-20 10:27:27.797160 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-20 10:27:27.797172 | orchestrator | Saturday 20 September 2025 10:27:18 +0000 (0:00:00.244) 0:00:14.596 **** 2025-09-20 10:27:27.797185 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.797196 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.797208 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.797220 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.797232 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.797244 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.797256 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.797268 | orchestrator | 2025-09-20 10:27:27.797280 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-20 10:27:27.797292 | orchestrator | Saturday 20 September 2025 10:27:18 +0000 (0:00:00.522) 0:00:15.119 **** 2025-09-20 10:27:27.797304 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:27.797316 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:27:27.797328 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:27:27.797341 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:27:27.797352 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:27:27.797363 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:27:27.797375 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:27:27.797386 | orchestrator | 2025-09-20 10:27:27.797398 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-20 10:27:27.797411 | orchestrator | Saturday 20 September 2025 10:27:19 +0000 (0:00:00.255) 0:00:15.374 **** 2025-09-20 10:27:27.797423 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.797435 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:27.797446 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:27.797458 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:27.797470 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:27:27.797482 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:27:27.797493 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:27:27.797503 | orchestrator | 2025-09-20 10:27:27.797514 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-20 10:27:27.797525 | orchestrator | Saturday 20 September 2025 10:27:19 +0000 (0:00:00.527) 0:00:15.902 **** 2025-09-20 10:27:27.797544 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.797555 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:27.797565 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:27.797576 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:27:27.797586 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:27.797597 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:27:27.797608 | 
orchestrator | changed: [testbed-node-5] 2025-09-20 10:27:27.797618 | orchestrator | 2025-09-20 10:27:27.797629 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-20 10:27:27.797640 | orchestrator | Saturday 20 September 2025 10:27:20 +0000 (0:00:01.177) 0:00:17.079 **** 2025-09-20 10:27:27.797651 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.797662 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.797672 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.797683 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.797694 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.797705 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.797716 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.797726 | orchestrator | 2025-09-20 10:27:27.797737 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-20 10:27:27.797748 | orchestrator | Saturday 20 September 2025 10:27:21 +0000 (0:00:01.125) 0:00:18.205 **** 2025-09-20 10:27:27.797776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:27:27.797787 | orchestrator | 2025-09-20 10:27:27.797798 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-20 10:27:27.797809 | orchestrator | Saturday 20 September 2025 10:27:22 +0000 (0:00:00.443) 0:00:18.649 **** 2025-09-20 10:27:27.797820 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:27.797830 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:27.797841 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:27:27.797852 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:27:27.797862 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:27:27.797873 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:27.797883 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:27.797894 | orchestrator | 2025-09-20 10:27:27.797905 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-20 10:27:27.797916 | orchestrator | Saturday 20 September 2025 10:27:23 +0000 (0:00:01.227) 0:00:19.876 **** 2025-09-20 10:27:27.797926 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.797937 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.797948 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.797958 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.797969 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.797979 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.797990 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.798001 | orchestrator | 2025-09-20 10:27:27.798011 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-20 10:27:27.798103 | orchestrator | Saturday 20 September 2025 10:27:23 +0000 (0:00:00.224) 0:00:20.100 **** 2025-09-20 10:27:27.798115 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.798126 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.798136 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.798147 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.798157 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.798167 | orchestrator | ok: 
[testbed-node-4] 2025-09-20 10:27:27.798178 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.798189 | orchestrator | 2025-09-20 10:27:27.798199 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-20 10:27:27.798210 | orchestrator | Saturday 20 September 2025 10:27:23 +0000 (0:00:00.216) 0:00:20.317 **** 2025-09-20 10:27:27.798221 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.798231 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.798250 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.798260 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.798271 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.798281 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.798292 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.798302 | orchestrator | 2025-09-20 10:27:27.798313 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-20 10:27:27.798370 | orchestrator | Saturday 20 September 2025 10:27:24 +0000 (0:00:00.207) 0:00:20.525 **** 2025-09-20 10:27:27.798388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:27:27.798401 | orchestrator | 2025-09-20 10:27:27.798412 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-20 10:27:27.798423 | orchestrator | Saturday 20 September 2025 10:27:24 +0000 (0:00:00.284) 0:00:20.809 **** 2025-09-20 10:27:27.798433 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.798444 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.798454 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.798465 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.798476 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.798486 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.798497 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.798507 | orchestrator | 2025-09-20 10:27:27.798518 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-20 10:27:27.798529 | orchestrator | Saturday 20 September 2025 10:27:24 +0000 (0:00:00.520) 0:00:21.330 **** 2025-09-20 10:27:27.798540 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:27:27.798550 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:27:27.798561 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:27:27.798572 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:27:27.798582 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:27:27.798593 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:27:27.798604 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:27:27.798614 | orchestrator | 2025-09-20 10:27:27.798625 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-20 10:27:27.798636 | orchestrator | Saturday 20 September 2025 10:27:25 +0000 (0:00:00.220) 0:00:21.550 **** 2025-09-20 10:27:27.798647 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.798657 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:27.798668 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:27:27.798679 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.798689 | orchestrator | ok: [testbed-node-4] 2025-09-20 
10:27:27.798700 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:27:27.798710 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.798721 | orchestrator | 2025-09-20 10:27:27.798732 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-20 10:27:27.798742 | orchestrator | Saturday 20 September 2025 10:27:26 +0000 (0:00:00.969) 0:00:22.519 **** 2025-09-20 10:27:27.798753 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.798764 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:27:27.798774 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:27:27.798785 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:27:27.798796 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.798806 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.798817 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:27:27.798827 | orchestrator | 2025-09-20 10:27:27.798838 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-20 10:27:27.798849 | orchestrator | Saturday 20 September 2025 10:27:26 +0000 (0:00:00.541) 0:00:23.061 **** 2025-09-20 10:27:27.798860 | orchestrator | ok: [testbed-manager] 2025-09-20 10:27:27.798870 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:27:27.798881 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:27:27.798892 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:27:27.798919 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.326229 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:28:07.326364 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:28:07.326381 | orchestrator | 2025-09-20 10:28:07.326393 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-20 10:28:07.326407 | orchestrator | Saturday 20 September 2025 10:27:27 +0000 (0:00:01.079) 0:00:24.140 **** 2025-09-20 10:28:07.326418 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.326430 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.326441 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.326452 | orchestrator | changed: [testbed-manager] 2025-09-20 10:28:07.326463 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:28:07.326473 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:28:07.326484 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:28:07.326495 | orchestrator | 2025-09-20 10:28:07.326506 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-20 10:28:07.326517 | orchestrator | Saturday 20 September 2025 10:27:43 +0000 (0:00:15.908) 0:00:40.049 **** 2025-09-20 10:28:07.326527 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.326538 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.326549 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.326560 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.326570 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.326581 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.326591 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.326602 | orchestrator | 2025-09-20 10:28:07.326613 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-20 10:28:07.326623 | orchestrator | Saturday 20 September 2025 10:27:43 +0000 (0:00:00.177) 0:00:40.226 **** 2025-09-20 10:28:07.326634 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.326645 | orchestrator | ok: [testbed-node-0] 2025-09-20 
10:28:07.326655 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.326666 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.326676 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.326687 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.326697 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.326708 | orchestrator | 2025-09-20 10:28:07.326719 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-20 10:28:07.326729 | orchestrator | Saturday 20 September 2025 10:27:44 +0000 (0:00:00.171) 0:00:40.397 **** 2025-09-20 10:28:07.326740 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.326751 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.326762 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.326772 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.326783 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.326793 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.326804 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.326814 | orchestrator | 2025-09-20 10:28:07.326825 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-20 10:28:07.326836 | orchestrator | Saturday 20 September 2025 10:27:44 +0000 (0:00:00.179) 0:00:40.577 **** 2025-09-20 10:28:07.326869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:28:07.326883 | orchestrator | 2025-09-20 10:28:07.326894 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-20 10:28:07.326905 | orchestrator | Saturday 20 September 2025 10:27:44 +0000 (0:00:00.251) 0:00:40.829 **** 2025-09-20 10:28:07.326916 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.326927 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.326937 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.326948 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.326958 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.326969 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.326979 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.327014 | orchestrator | 2025-09-20 10:28:07.327025 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-20 10:28:07.327036 | orchestrator | Saturday 20 September 2025 10:27:45 +0000 (0:00:01.317) 0:00:42.146 **** 2025-09-20 10:28:07.327089 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:28:07.327110 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:28:07.327136 | orchestrator | changed: [testbed-manager] 2025-09-20 10:28:07.327157 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:28:07.327174 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:28:07.327191 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:28:07.327208 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:28:07.327223 | orchestrator | 2025-09-20 10:28:07.327242 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-20 10:28:07.327261 | orchestrator | Saturday 20 September 2025 10:27:46 +0000 (0:00:00.930) 0:00:43.076 **** 2025-09-20 10:28:07.327280 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.327297 | orchestrator | ok: 
[testbed-node-1] 2025-09-20 10:28:07.327317 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.327336 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.327355 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.327372 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.327389 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.327406 | orchestrator | 2025-09-20 10:28:07.327424 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-20 10:28:07.327443 | orchestrator | Saturday 20 September 2025 10:27:47 +0000 (0:00:00.744) 0:00:43.821 **** 2025-09-20 10:28:07.327462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:28:07.327482 | orchestrator | 2025-09-20 10:28:07.327494 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-20 10:28:07.327506 | orchestrator | Saturday 20 September 2025 10:27:47 +0000 (0:00:00.347) 0:00:44.169 **** 2025-09-20 10:28:07.327517 | orchestrator | changed: [testbed-manager] 2025-09-20 10:28:07.327527 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:28:07.327538 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:28:07.327549 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:28:07.327560 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:28:07.327571 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:28:07.327582 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:28:07.327592 | orchestrator | 2025-09-20 10:28:07.327627 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-20 10:28:07.327639 | orchestrator | Saturday 20 September 2025 10:27:48 +0000 (0:00:01.031) 0:00:45.201 **** 2025-09-20 10:28:07.327649 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:28:07.327660 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:28:07.327671 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:28:07.327681 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:28:07.327692 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:28:07.327702 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:28:07.327713 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:28:07.327723 | orchestrator | 2025-09-20 10:28:07.327734 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-20 10:28:07.327745 | orchestrator | Saturday 20 September 2025 10:27:49 +0000 (0:00:00.324) 0:00:45.525 **** 2025-09-20 10:28:07.327755 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:28:07.327766 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:28:07.327776 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:28:07.327787 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:28:07.327797 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:28:07.327808 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:28:07.327818 | orchestrator | changed: [testbed-manager] 2025-09-20 10:28:07.327842 | orchestrator | 2025-09-20 10:28:07.327853 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-20 10:28:07.327864 | orchestrator | Saturday 20 September 2025 10:28:01 +0000 (0:00:11.917) 0:00:57.443 **** 2025-09-20 10:28:07.327875 | 
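
The rsyslog tasks above install the package, replace /etc/rsyslog.conf, keep the service running, and add a rule that forwards local syslog traffic to a fluentd daemon on the same host. A minimal sketch of that forwarding rule, assuming fluentd listens on 127.0.0.1:5140 and a drop-in named 60-fluentd.conf (both assumptions; the log does not show the actual values used by osism.services.rsyslog):

- name: Forward syslog to a local fluentd input (sketch)
  hosts: all
  become: true
  tasks:
    - name: Install rsyslog package
      ansible.builtin.apt:
        name: rsyslog
        state: present

    - name: Forward all messages to the local fluentd daemon
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/60-fluentd.conf   # hypothetical drop-in name
        content: |
          *.* @127.0.0.1:5140
        mode: "0644"
      notify: Restart rsyslog

  handlers:
    - name: Restart rsyslog
      ansible.builtin.service:
        name: rsyslog
        state: restarted

The single leading @ selects UDP forwarding in rsyslog syntax; @@ would switch the same rule to TCP.
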
orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.327885 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.327896 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.327907 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.327917 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.327928 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.327939 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.327949 | orchestrator | 2025-09-20 10:28:07.327960 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-20 10:28:07.327971 | orchestrator | Saturday 20 September 2025 10:28:02 +0000 (0:00:01.620) 0:00:59.063 **** 2025-09-20 10:28:07.327982 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.327992 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.328003 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.328013 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.328024 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.328035 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.328076 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.328088 | orchestrator | 2025-09-20 10:28:07.328099 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-20 10:28:07.328110 | orchestrator | Saturday 20 September 2025 10:28:04 +0000 (0:00:01.644) 0:01:00.708 **** 2025-09-20 10:28:07.328121 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.328132 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.328142 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.328153 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.328163 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.328174 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.328185 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.328195 | orchestrator | 2025-09-20 10:28:07.328206 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-20 10:28:07.328217 | orchestrator | Saturday 20 September 2025 10:28:04 +0000 (0:00:00.181) 0:01:00.890 **** 2025-09-20 10:28:07.328228 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.328239 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.328249 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.328260 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.328270 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.328281 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.328291 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.328302 | orchestrator | 2025-09-20 10:28:07.328313 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-20 10:28:07.328323 | orchestrator | Saturday 20 September 2025 10:28:04 +0000 (0:00:00.202) 0:01:01.093 **** 2025-09-20 10:28:07.328335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:28:07.328346 | orchestrator | 2025-09-20 10:28:07.328357 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-20 10:28:07.328368 | orchestrator | Saturday 20 September 2025 10:28:04 +0000 (0:00:00.234) 0:01:01.327 **** 2025-09-20 10:28:07.328379 | 
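
The systohc tasks install util-linux-extra, which carries the hwclock utility on Ubuntu 24.04, and then write the system time to the hardware clock; the configfs task only makes sure the sys-kernel-config mount unit is active. A standalone sketch of the same steps (not the actual role code):

- name: Sync the hardware clock and mount configfs (sketch)
  hosts: all
  become: true
  tasks:
    - name: Install util-linux-extra package
      ansible.builtin.apt:
        name: util-linux-extra
        state: present

    - name: Write system time to the hardware clock
      ansible.builtin.command: hwclock --systohc
      changed_when: false   # reported as ok above, so no change tracking in this sketch

    - name: Start sys-kernel-config mount
      ansible.builtin.systemd:
        name: sys-kernel-config.mount
        state: started
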
orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.328390 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.328401 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.328411 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.328422 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.328432 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.328443 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.328453 | orchestrator | 2025-09-20 10:28:07.328464 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-20 10:28:07.328475 | orchestrator | Saturday 20 September 2025 10:28:06 +0000 (0:00:01.561) 0:01:02.889 **** 2025-09-20 10:28:07.328493 | orchestrator | changed: [testbed-manager] 2025-09-20 10:28:07.328503 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:28:07.328514 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:28:07.328525 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:28:07.328536 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:28:07.328547 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:28:07.328557 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:28:07.328568 | orchestrator | 2025-09-20 10:28:07.328579 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-20 10:28:07.328589 | orchestrator | Saturday 20 September 2025 10:28:07 +0000 (0:00:00.541) 0:01:03.431 **** 2025-09-20 10:28:07.328600 | orchestrator | ok: [testbed-manager] 2025-09-20 10:28:07.328611 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:28:07.328622 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:28:07.328632 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:28:07.328643 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:28:07.328653 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:28:07.328664 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:28:07.328674 | orchestrator | 2025-09-20 10:28:07.328693 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-20 10:30:14.536868 | orchestrator | Saturday 20 September 2025 10:28:07 +0000 (0:00:00.243) 0:01:03.674 **** 2025-09-20 10:30:14.536965 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:14.536981 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:14.536992 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:14.537003 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:14.537014 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:14.537025 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:14.537036 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:14.537047 | orchestrator | 2025-09-20 10:30:14.537058 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-20 10:30:14.537094 | orchestrator | Saturday 20 September 2025 10:28:09 +0000 (0:00:01.954) 0:01:05.629 **** 2025-09-20 10:30:14.537106 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:14.537117 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:30:14.537128 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:14.537139 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:14.537150 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:14.537160 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:14.537171 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:14.537182 | orchestrator | 2025-09-20 10:30:14.537193 | 
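
Setting the needrestart mode is what keeps the later apt runs non-interactive: without it, needrestart prompts for service restarts and can block unattended package operations. A sketch using the conventional needrestart configuration file (the exact file and value used by osism.commons.packages are not visible in this log):

- name: Make needrestart non-interactive (sketch)
  hosts: all
  become: true
  tasks:
    - name: Install needrestart package
      ansible.builtin.apt:
        name: needrestart
        state: present

    - name: Set needrestart mode to automatic restarts
      ansible.builtin.lineinfile:
        path: /etc/needrestart/needrestart.conf
        regexp: '^\$nrconf\{restart\}'
        line: "$nrconf{restart} = 'a';"
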
orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-20 10:30:14.537205 | orchestrator | Saturday 20 September 2025 10:28:10 +0000 (0:00:01.449) 0:01:07.079 **** 2025-09-20 10:30:14.537216 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:14.537226 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:14.537237 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:14.537248 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:14.537259 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:14.537270 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:14.537280 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:14.537291 | orchestrator | 2025-09-20 10:30:14.537302 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-20 10:30:14.537313 | orchestrator | Saturday 20 September 2025 10:28:12 +0000 (0:00:02.061) 0:01:09.140 **** 2025-09-20 10:30:14.537324 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:14.537335 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:14.537345 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:14.537356 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:14.537381 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:14.537392 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:14.537403 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:14.537415 | orchestrator | 2025-09-20 10:30:14.537428 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-20 10:30:14.537463 | orchestrator | Saturday 20 September 2025 10:28:47 +0000 (0:00:34.595) 0:01:43.736 **** 2025-09-20 10:30:14.537475 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:14.537488 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:14.537499 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:14.537512 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:14.537523 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:30:14.537534 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:14.537544 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:14.537555 | orchestrator | 2025-09-20 10:30:14.537571 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-20 10:30:14.537583 | orchestrator | Saturday 20 September 2025 10:30:00 +0000 (0:01:12.695) 0:02:56.431 **** 2025-09-20 10:30:14.537594 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:14.537605 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:14.537616 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:14.537626 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:14.537637 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:14.537648 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:14.537658 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:14.537669 | orchestrator | 2025-09-20 10:30:14.537680 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-20 10:30:14.537691 | orchestrator | Saturday 20 September 2025 10:30:01 +0000 (0:00:01.621) 0:02:58.052 **** 2025-09-20 10:30:14.537702 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:14.537713 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:14.537723 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:14.537734 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:14.537744 | orchestrator | ok: [testbed-node-2] 2025-09-20 
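
Taken together, the packages tasks run a full APT maintenance pass: refresh the cache, pre-download and apply upgrades, install the required package set, and finally clean the cache and remove orphaned dependencies (the 1:12 minute "Install required packages" step dominates this phase). A condensed sketch of that flow, with osism_required_packages as a hypothetical example variable and the separate download-only steps folded into the upgrade/install tasks:

- name: APT maintenance pass (sketch)
  hosts: all
  become: true
  vars:
    osism_required_packages:   # hypothetical example list
      - jq
      - tmux
  tasks:
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

    - name: Install required packages
      ansible.builtin.apt:
        name: "{{ osism_required_packages }}"
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true
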
10:30:14.537755 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:14.537765 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:14.537776 | orchestrator | 2025-09-20 10:30:14.537787 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-20 10:30:14.537798 | orchestrator | Saturday 20 September 2025 10:30:13 +0000 (0:00:11.665) 0:03:09.717 **** 2025-09-20 10:30:14.537864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-20 10:30:14.537881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-20 10:30:14.537929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-20 10:30:14.537949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-20 10:30:14.537970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-20 10:30:14.537981 | orchestrator | 2025-09-20 10:30:14.537992 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-20 10:30:14.538003 | orchestrator | Saturday 20 September 2025 10:30:13 +0000 (0:00:00.408) 0:03:10.126 **** 2025-09-20 10:30:14.538044 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-20 10:30:14.538058 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:14.538085 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-20 10:30:14.538096 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-20 10:30:14.538107 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:30:14.538117 | orchestrator | skipping: [testbed-node-4] 
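
Each included sysctl.yml applies one group-specific parameter list, and the item structure shown above (a key naming the host group plus a list of name/value pairs) maps directly onto a loop over the ansible.posix.sysctl module. A minimal sketch for the rabbitmq set, assuming the target hosts are selected via group membership (not the actual osism.commons.sysctl task file):

- name: Apply RabbitMQ-related kernel parameters (sketch)
  hosts: all
  become: true
  vars:
    rabbitmq_sysctl:
      - { name: net.ipv4.tcp_keepalive_time, value: 6 }
      - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
      - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
      - { name: net.core.somaxconn, value: 4096 }
  tasks:
    - name: Set sysctl parameters on rabbitmq hosts
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop: "{{ rabbitmq_sysctl }}"
      when: "'rabbitmq' in group_names"
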
2025-09-20 10:30:14.538128 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-20 10:30:14.538139 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:30:14.538149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-20 10:30:14.538160 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-20 10:30:14.538171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-20 10:30:14.538181 | orchestrator | 2025-09-20 10:30:14.538192 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-20 10:30:14.538208 | orchestrator | Saturday 20 September 2025 10:30:14 +0000 (0:00:00.568) 0:03:10.694 **** 2025-09-20 10:30:14.538231 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-20 10:30:14.538243 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-20 10:30:14.538254 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-20 10:30:14.538265 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-20 10:30:14.538275 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-20 10:30:14.538286 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-20 10:30:14.538297 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-20 10:30:14.538307 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-20 10:30:14.538318 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-20 10:30:14.538329 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-20 10:30:14.538340 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:14.538351 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-20 10:30:14.538361 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-20 10:30:14.538372 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-20 10:30:14.538383 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-20 10:30:14.538394 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-20 10:30:14.538404 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-20 10:30:14.538423 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-20 10:30:14.538434 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-20 10:30:14.538444 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-20 10:30:14.538455 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-20 10:30:14.538474 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-20 10:30:21.252828 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-20 10:30:21.252946 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-20 10:30:21.252962 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-20 10:30:21.252974 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-20 10:30:21.252987 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:30:21.252999 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-20 10:30:21.253010 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-20 10:30:21.253022 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-20 10:30:21.253032 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-20 10:30:21.253043 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-20 10:30:21.253054 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:30:21.253104 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-20 10:30:21.253116 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-20 10:30:21.253127 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-20 10:30:21.253138 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-20 10:30:21.253149 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-20 10:30:21.253159 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-20 10:30:21.253170 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-20 10:30:21.253181 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-20 10:30:21.253192 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-20 10:30:21.253220 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-20 10:30:21.253232 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:30:21.253242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-20 10:30:21.253254 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-20 10:30:21.253265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-20 10:30:21.253275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-20 10:30:21.253286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 
3}) 2025-09-20 10:30:21.253297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-20 10:30:21.253308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-20 10:30:21.253345 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-20 10:30:21.253359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-20 10:30:21.253372 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-20 10:30:21.253384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-20 10:30:21.253397 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-20 10:30:21.253409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-20 10:30:21.253421 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-20 10:30:21.253433 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-20 10:30:21.253445 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-20 10:30:21.253457 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-20 10:30:21.253469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-20 10:30:21.253481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-20 10:30:21.253493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-20 10:30:21.253506 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-20 10:30:21.253537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-20 10:30:21.253550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-20 10:30:21.253563 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-20 10:30:21.253575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-20 10:30:21.253587 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-20 10:30:21.253599 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-20 10:30:21.253611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-20 10:30:21.253623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-20 10:30:21.253635 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-20 10:30:21.253647 | orchestrator | 2025-09-20 10:30:21.253660 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-20 10:30:21.253672 | orchestrator | Saturday 20 September 2025 10:30:18 +0000 (0:00:04.009) 0:03:14.703 **** 2025-09-20 
10:30:21.253684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253709 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253719 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253730 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253740 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253751 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-20 10:30:21.253761 | orchestrator | 2025-09-20 10:30:21.253772 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-20 10:30:21.253799 | orchestrator | Saturday 20 September 2025 10:30:19 +0000 (0:00:01.363) 0:03:16.067 **** 2025-09-20 10:30:21.253810 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-20 10:30:21.253821 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:21.253832 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-20 10:30:21.253843 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:30:21.253853 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-20 10:30:21.253864 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-20 10:30:21.253875 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:30:21.253886 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:30:21.253904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-20 10:30:21.253916 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-20 10:30:21.253926 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-20 10:30:21.253937 | orchestrator | 2025-09-20 10:30:21.253948 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-20 10:30:21.253958 | orchestrator | Saturday 20 September 2025 10:30:20 +0000 (0:00:00.486) 0:03:16.553 **** 2025-09-20 10:30:21.253969 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-20 10:30:21.253980 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:21.253990 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-20 10:30:21.254001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-20 10:30:21.254012 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:30:21.254094 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:30:21.254105 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-20 10:30:21.254116 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:30:21.254126 | orchestrator | 
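
The skip/changed pattern above shows the group scoping in practice: the elasticsearch and rabbitmq sets only land on testbed-node-0/1/2, the compute and k3s_node sets only on testbed-node-3/4/5, and the generic vm.swappiness=1 is applied everywhere. A small follow-up check (hypothetical, not part of this job) could read one of the values back and assert it:

- name: Spot-check an applied sysctl value (sketch)
  hosts: all
  become: true
  tasks:
    - name: Read vm.swappiness
      ansible.builtin.command: sysctl -n vm.swappiness
      register: swappiness
      changed_when: false

    - name: Assert the expected value
      ansible.builtin.assert:
        that:
          - swappiness.stdout == "1"
        fail_msg: "vm.swappiness is {{ swappiness.stdout }}, expected 1"
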
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-20 10:30:21.254137 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-20 10:30:21.254148 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-20 10:30:21.254159 | orchestrator | 2025-09-20 10:30:21.254169 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-20 10:30:21.254180 | orchestrator | Saturday 20 September 2025 10:30:20 +0000 (0:00:00.676) 0:03:17.230 **** 2025-09-20 10:30:21.254191 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:21.254202 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:30:21.254213 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:30:21.254223 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:30:21.254234 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:30:21.254252 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:30:32.469465 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:30:32.469565 | orchestrator | 2025-09-20 10:30:32.469575 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-20 10:30:32.469584 | orchestrator | Saturday 20 September 2025 10:30:21 +0000 (0:00:00.374) 0:03:17.605 **** 2025-09-20 10:30:32.469591 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:32.469600 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:32.469607 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:32.469613 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:32.469638 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:32.469645 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:32.469651 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:32.469657 | orchestrator | 2025-09-20 10:30:32.469664 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-20 10:30:32.469670 | orchestrator | Saturday 20 September 2025 10:30:26 +0000 (0:00:05.498) 0:03:23.103 **** 2025-09-20 10:30:32.469676 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-20 10:30:32.469683 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-20 10:30:32.469689 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:32.469695 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-20 10:30:32.469701 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:30:32.469707 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-20 10:30:32.469713 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:30:32.469719 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-20 10:30:32.469725 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:30:32.469732 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:30:32.469738 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-20 10:30:32.469747 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:30:32.469753 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-20 10:30:32.469759 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:30:32.469765 | orchestrator | 2025-09-20 10:30:32.469772 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-20 10:30:32.469778 | orchestrator | Saturday 20 September 2025 10:30:27 +0000 
(0:00:00.338) 0:03:23.442 **** 2025-09-20 10:30:32.469784 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-20 10:30:32.469790 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-20 10:30:32.469796 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-20 10:30:32.469802 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-20 10:30:32.469808 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-20 10:30:32.469814 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-20 10:30:32.469820 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-20 10:30:32.469826 | orchestrator | 2025-09-20 10:30:32.469832 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-20 10:30:32.469850 | orchestrator | Saturday 20 September 2025 10:30:28 +0000 (0:00:01.076) 0:03:24.518 **** 2025-09-20 10:30:32.469859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:30:32.469868 | orchestrator | 2025-09-20 10:30:32.469874 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-20 10:30:32.469880 | orchestrator | Saturday 20 September 2025 10:30:28 +0000 (0:00:00.504) 0:03:25.023 **** 2025-09-20 10:30:32.469886 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:32.469892 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:32.469898 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:32.469904 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:32.469910 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:32.469916 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:32.469922 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:32.469928 | orchestrator | 2025-09-20 10:30:32.469935 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-20 10:30:32.469941 | orchestrator | Saturday 20 September 2025 10:30:29 +0000 (0:00:01.253) 0:03:26.276 **** 2025-09-20 10:30:32.469947 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:32.469953 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:32.469959 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:32.469965 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:32.469971 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:32.469977 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:32.469989 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:32.469995 | orchestrator | 2025-09-20 10:30:32.470001 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-20 10:30:32.470008 | orchestrator | Saturday 20 September 2025 10:30:30 +0000 (0:00:00.555) 0:03:26.832 **** 2025-09-20 10:30:32.470055 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:32.470064 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:32.470097 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:30:32.470105 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:32.470112 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:32.470119 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:32.470126 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:32.470133 | orchestrator | 2025-09-20 10:30:32.470140 | orchestrator | TASK [osism.commons.motd : Get all 
configuration files in /etc/pam.d] ********** 2025-09-20 10:30:32.470147 | orchestrator | Saturday 20 September 2025 10:30:31 +0000 (0:00:00.593) 0:03:27.425 **** 2025-09-20 10:30:32.470153 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:32.470161 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:32.470168 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:32.470175 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:32.470182 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:32.470189 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:32.470196 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:32.470203 | orchestrator | 2025-09-20 10:30:32.470209 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-20 10:30:32.470216 | orchestrator | Saturday 20 September 2025 10:30:31 +0000 (0:00:00.517) 0:03:27.943 **** 2025-09-20 10:30:32.470241 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758362897.415245, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470251 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758362928.005412, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470259 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758362931.2305043, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470271 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758362940.8750923, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470279 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1758362936.842535, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470292 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758362931.3017032, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470299 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758362937.5920243, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:32.470379 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.380811 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.380937 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.380954 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 
'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.380967 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.380999 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.381010 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 10:30:47.381022 | orchestrator | 2025-09-20 10:30:47.381036 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-20 10:30:47.381049 | orchestrator | Saturday 20 September 2025 10:30:32 +0000 (0:00:00.872) 0:03:28.815 **** 2025-09-20 10:30:47.381060 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:47.381132 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:47.381144 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:30:47.381155 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:47.381165 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:47.381176 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:47.381186 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:47.381197 | orchestrator | 2025-09-20 10:30:47.381207 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-20 10:30:47.381218 | orchestrator | Saturday 20 September 2025 10:30:33 +0000 (0:00:01.040) 0:03:29.856 **** 2025-09-20 10:30:47.381229 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:47.381240 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:30:47.381250 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:47.381260 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:47.381289 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:47.381301 | 
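
The motd tasks replace Ubuntu's dynamic message of the day with static files: the motd-news service is disabled, every pam_motd.so rule found under /etc/pam.d is removed, and static motd/issue files are copied in. A sketch of the PAM and motd-news pieces (the removal regexp and the placeholder motd text are assumptions, not the role's actual content):

- name: Static MOTD without dynamic news (sketch)
  hosts: all
  become: true
  tasks:
    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: ENABLED=0

    - name: Remove pam_motd.so rules
      ansible.builtin.lineinfile:
        path: "{{ item }}"
        regexp: 'pam_motd\.so'
        state: absent
      loop:
        - /etc/pam.d/sshd
        - /etc/pam.d/login

    - name: Copy motd file
      ansible.builtin.copy:
        dest: /etc/motd
        content: |
          Managed by the testbed deployment.
        mode: "0644"
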
orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:47.381311 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:47.381321 | orchestrator | 2025-09-20 10:30:47.381333 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-20 10:30:47.381345 | orchestrator | Saturday 20 September 2025 10:30:34 +0000 (0:00:01.193) 0:03:31.050 **** 2025-09-20 10:30:47.381357 | orchestrator | changed: [testbed-manager] 2025-09-20 10:30:47.381368 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:30:47.381380 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:47.381392 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:47.381403 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:47.381416 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:47.381427 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:47.381439 | orchestrator | 2025-09-20 10:30:47.381451 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-20 10:30:47.381463 | orchestrator | Saturday 20 September 2025 10:30:35 +0000 (0:00:01.054) 0:03:32.104 **** 2025-09-20 10:30:47.381484 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:30:47.381497 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:30:47.381508 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:30:47.381535 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:30:47.381548 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:30:47.381559 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:30:47.381572 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:30:47.381583 | orchestrator | 2025-09-20 10:30:47.381594 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-20 10:30:47.381606 | orchestrator | Saturday 20 September 2025 10:30:35 +0000 (0:00:00.222) 0:03:32.326 **** 2025-09-20 10:30:47.381619 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:47.381632 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:47.381644 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:47.381656 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:47.381668 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:47.381679 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:47.381691 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:47.381701 | orchestrator | 2025-09-20 10:30:47.381712 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-20 10:30:47.381722 | orchestrator | Saturday 20 September 2025 10:30:36 +0000 (0:00:00.660) 0:03:32.987 **** 2025-09-20 10:30:47.381739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:30:47.381753 | orchestrator | 2025-09-20 10:30:47.381764 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-20 10:30:47.381774 | orchestrator | Saturday 20 September 2025 10:30:36 +0000 (0:00:00.350) 0:03:33.337 **** 2025-09-20 10:30:47.381785 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:47.381795 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:30:47.381806 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:30:47.381816 | orchestrator | changed: [testbed-node-0] 2025-09-20 
10:30:47.381827 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:30:47.381837 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:30:47.381847 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:30:47.381858 | orchestrator | 2025-09-20 10:30:47.381868 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-20 10:30:47.381879 | orchestrator | Saturday 20 September 2025 10:30:44 +0000 (0:00:07.453) 0:03:40.790 **** 2025-09-20 10:30:47.381889 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:47.381900 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:47.381910 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:47.381921 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:47.381931 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:47.381941 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:47.381952 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:47.381963 | orchestrator | 2025-09-20 10:30:47.381974 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-20 10:30:47.381984 | orchestrator | Saturday 20 September 2025 10:30:45 +0000 (0:00:01.110) 0:03:41.900 **** 2025-09-20 10:30:47.381995 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:47.382005 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:47.382093 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:47.382106 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:47.382117 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:47.382127 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:47.382137 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:47.382148 | orchestrator | 2025-09-20 10:30:47.382159 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-20 10:30:47.382192 | orchestrator | Saturday 20 September 2025 10:30:46 +0000 (0:00:00.961) 0:03:42.862 **** 2025-09-20 10:30:47.382203 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:47.382222 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:47.382232 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:47.382242 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:47.382253 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:47.382263 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:47.382273 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:47.382284 | orchestrator | 2025-09-20 10:30:47.382295 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-20 10:30:47.382306 | orchestrator | Saturday 20 September 2025 10:30:46 +0000 (0:00:00.244) 0:03:43.106 **** 2025-09-20 10:30:47.382317 | orchestrator | ok: [testbed-manager] 2025-09-20 10:30:47.382327 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:47.382338 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:47.382348 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:47.382358 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:47.382369 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:30:47.382379 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:30:47.382389 | orchestrator | 2025-09-20 10:30:47.382400 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-20 10:30:47.382411 | orchestrator | Saturday 20 September 2025 10:30:47 +0000 (0:00:00.376) 0:03:43.483 **** 2025-09-20 10:30:47.382421 | orchestrator | ok: [testbed-manager] 
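
For entropy, the rng role installs an rngd-based package, removes the older haveged daemon, and keeps the rng service enabled and running. The log only shows the task names, so the package and service names below (rng-tools5 / rngd) are assumptions that vary between Debian-family releases:

- name: Provide a hardware entropy daemon (sketch)
  hosts: all
  become: true
  tasks:
    - name: Install rng package
      ansible.builtin.apt:
        name: rng-tools5   # assumed package name
        state: present

    - name: Remove haveged package
      ansible.builtin.apt:
        name: haveged
        state: absent

    - name: Manage rng service
      ansible.builtin.service:
        name: rngd   # assumed service name
        state: started
        enabled: true
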
2025-09-20 10:30:47.382432 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:30:47.382442 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:30:47.382452 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:30:47.382463 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:30:47.382481 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:31:54.341252 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:31:54.341364 | orchestrator | 2025-09-20 10:31:54.341379 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-20 10:31:54.341392 | orchestrator | Saturday 20 September 2025 10:30:47 +0000 (0:00:00.248) 0:03:43.731 **** 2025-09-20 10:31:54.341402 | orchestrator | ok: [testbed-manager] 2025-09-20 10:31:54.341412 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:31:54.341422 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:31:54.341432 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:31:54.341441 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:31:54.341451 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:31:54.341460 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:31:54.341470 | orchestrator | 2025-09-20 10:31:54.341479 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-20 10:31:54.341489 | orchestrator | Saturday 20 September 2025 10:30:52 +0000 (0:00:05.603) 0:03:49.335 **** 2025-09-20 10:31:54.341501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:31:54.341513 | orchestrator | 2025-09-20 10:31:54.341523 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-20 10:31:54.341533 | orchestrator | Saturday 20 September 2025 10:30:53 +0000 (0:00:00.430) 0:03:49.766 **** 2025-09-20 10:31:54.341543 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341553 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-20 10:31:54.341563 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341572 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-20 10:31:54.341582 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:31:54.341592 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341602 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:31:54.341611 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-20 10:31:54.341621 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:31:54.341630 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341640 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-20 10:31:54.341673 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341696 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:31:54.341706 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-20 10:31:54.341715 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341725 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:31:54.341734 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-20 10:31:54.341744 | orchestrator | skipping: 
[testbed-node-4] 2025-09-20 10:31:54.341753 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-20 10:31:54.341762 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-20 10:31:54.341772 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:31:54.341781 | orchestrator | 2025-09-20 10:31:54.341793 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-20 10:31:54.341804 | orchestrator | Saturday 20 September 2025 10:30:53 +0000 (0:00:00.343) 0:03:50.109 **** 2025-09-20 10:31:54.341815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:31:54.341826 | orchestrator | 2025-09-20 10:31:54.341836 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-20 10:31:54.341848 | orchestrator | Saturday 20 September 2025 10:30:54 +0000 (0:00:00.439) 0:03:50.548 **** 2025-09-20 10:31:54.341858 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-20 10:31:54.341868 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:31:54.341879 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-20 10:31:54.341890 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:31:54.341900 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-20 10:31:54.341911 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-20 10:31:54.341921 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:31:54.341932 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:31:54.341942 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-20 10:31:54.341953 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-20 10:31:54.341964 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:31:54.341974 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:31:54.341984 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-20 10:31:54.341995 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:31:54.342006 | orchestrator | 2025-09-20 10:31:54.342086 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-20 10:31:54.342101 | orchestrator | Saturday 20 September 2025 10:30:54 +0000 (0:00:00.338) 0:03:50.887 **** 2025-09-20 10:31:54.342112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:31:54.342124 | orchestrator | 2025-09-20 10:31:54.342134 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-20 10:31:54.342145 | orchestrator | Saturday 20 September 2025 10:30:54 +0000 (0:00:00.424) 0:03:51.311 **** 2025-09-20 10:31:54.342154 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:31:54.342181 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:31:54.342191 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:31:54.342201 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:31:54.342211 | orchestrator | changed: 
[testbed-node-2] 2025-09-20 10:31:54.342220 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:31:54.342230 | orchestrator | changed: [testbed-manager] 2025-09-20 10:31:54.342239 | orchestrator | 2025-09-20 10:31:54.342249 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-20 10:31:54.342267 | orchestrator | Saturday 20 September 2025 10:31:28 +0000 (0:00:33.253) 0:04:24.565 **** 2025-09-20 10:31:54.342277 | orchestrator | changed: [testbed-manager] 2025-09-20 10:31:54.342286 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:31:54.342296 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:31:54.342305 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:31:54.342315 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:31:54.342324 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:31:54.342333 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:31:54.342343 | orchestrator | 2025-09-20 10:31:54.342352 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-20 10:31:54.342362 | orchestrator | Saturday 20 September 2025 10:31:35 +0000 (0:00:07.660) 0:04:32.226 **** 2025-09-20 10:31:54.342371 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:31:54.342381 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:31:54.342390 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:31:54.342400 | orchestrator | changed: [testbed-manager] 2025-09-20 10:31:54.342409 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:31:54.342419 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:31:54.342428 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:31:54.342437 | orchestrator | 2025-09-20 10:31:54.342447 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-20 10:31:54.342456 | orchestrator | Saturday 20 September 2025 10:31:43 +0000 (0:00:07.197) 0:04:39.423 **** 2025-09-20 10:31:54.342466 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:31:54.342476 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:31:54.342485 | orchestrator | ok: [testbed-manager] 2025-09-20 10:31:54.342495 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:31:54.342504 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:31:54.342514 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:31:54.342523 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:31:54.342533 | orchestrator | 2025-09-20 10:31:54.342542 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-20 10:31:54.342553 | orchestrator | Saturday 20 September 2025 10:31:44 +0000 (0:00:01.629) 0:04:41.053 **** 2025-09-20 10:31:54.342562 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:31:54.342572 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:31:54.342587 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:31:54.342597 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:31:54.342607 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:31:54.342616 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:31:54.342626 | orchestrator | changed: [testbed-manager] 2025-09-20 10:31:54.342635 | orchestrator | 2025-09-20 10:31:54.342645 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-20 10:31:54.342654 | orchestrator | Saturday 20 September 2025 10:31:50 +0000 (0:00:05.688) 0:04:46.741 **** 2025-09-20 10:31:54.342665 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:31:54.342676 | orchestrator | 2025-09-20 10:31:54.342686 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-20 10:31:54.342695 | orchestrator | Saturday 20 September 2025 10:31:51 +0000 (0:00:00.629) 0:04:47.371 **** 2025-09-20 10:31:54.342705 | orchestrator | changed: [testbed-manager] 2025-09-20 10:31:54.342714 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:31:54.342724 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:31:54.342733 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:31:54.342742 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:31:54.342752 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:31:54.342761 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:31:54.342770 | orchestrator | 2025-09-20 10:31:54.342780 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-20 10:31:54.342795 | orchestrator | Saturday 20 September 2025 10:31:51 +0000 (0:00:00.705) 0:04:48.076 **** 2025-09-20 10:31:54.342805 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:31:54.342815 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:31:54.342824 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:31:54.342834 | orchestrator | ok: [testbed-manager] 2025-09-20 10:31:54.342843 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:31:54.342853 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:31:54.342862 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:31:54.342872 | orchestrator | 2025-09-20 10:31:54.342881 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-20 10:31:54.342891 | orchestrator | Saturday 20 September 2025 10:31:53 +0000 (0:00:01.541) 0:04:49.618 **** 2025-09-20 10:31:54.342900 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:31:54.342910 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:31:54.342919 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:31:54.342929 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:31:54.342938 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:31:54.342947 | orchestrator | changed: [testbed-manager] 2025-09-20 10:31:54.342957 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:31:54.342966 | orchestrator | 2025-09-20 10:31:54.342976 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-20 10:31:54.342985 | orchestrator | Saturday 20 September 2025 10:31:54 +0000 (0:00:00.762) 0:04:50.381 **** 2025-09-20 10:31:54.342995 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:31:54.343004 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:31:54.343014 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:31:54.343023 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:31:54.343032 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:31:54.343041 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:31:54.343051 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:31:54.343060 | orchestrator | 2025-09-20 10:31:54.343104 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-20 10:31:54.343121 | orchestrator | Saturday 20 September 
2025 10:31:54 +0000 (0:00:00.306) 0:04:50.688 **** 2025-09-20 10:32:19.157805 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:32:19.157929 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:32:19.157944 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:32:19.157956 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:32:19.157966 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:32:19.157978 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:32:19.157989 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:32:19.158000 | orchestrator | 2025-09-20 10:32:19.158012 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-20 10:32:19.158128 | orchestrator | Saturday 20 September 2025 10:31:54 +0000 (0:00:00.389) 0:04:51.077 **** 2025-09-20 10:32:19.158139 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.158152 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:32:19.158163 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:32:19.158173 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:32:19.158184 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:32:19.158195 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:32:19.158205 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:32:19.158216 | orchestrator | 2025-09-20 10:32:19.158227 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-20 10:32:19.158238 | orchestrator | Saturday 20 September 2025 10:31:54 +0000 (0:00:00.242) 0:04:51.320 **** 2025-09-20 10:32:19.158250 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:32:19.158261 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:32:19.158272 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:32:19.158282 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:32:19.158293 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:32:19.158303 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:32:19.158314 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:32:19.158352 | orchestrator | 2025-09-20 10:32:19.158365 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-20 10:32:19.158378 | orchestrator | Saturday 20 September 2025 10:31:55 +0000 (0:00:00.258) 0:04:51.578 **** 2025-09-20 10:32:19.158390 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.158403 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:32:19.158416 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:32:19.158428 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:32:19.158439 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:32:19.158451 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:32:19.158464 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:32:19.158476 | orchestrator | 2025-09-20 10:32:19.158487 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-20 10:32:19.158500 | orchestrator | Saturday 20 September 2025 10:31:55 +0000 (0:00:00.267) 0:04:51.846 **** 2025-09-20 10:32:19.158512 | orchestrator | ok: [testbed-manager] =>  2025-09-20 10:32:19.158524 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158537 | orchestrator | ok: [testbed-node-0] =>  2025-09-20 10:32:19.158549 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158560 | orchestrator | ok: [testbed-node-1] =>  2025-09-20 10:32:19.158572 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158584 | 
orchestrator | ok: [testbed-node-2] =>  2025-09-20 10:32:19.158597 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158609 | orchestrator | ok: [testbed-node-3] =>  2025-09-20 10:32:19.158621 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158633 | orchestrator | ok: [testbed-node-4] =>  2025-09-20 10:32:19.158643 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158654 | orchestrator | ok: [testbed-node-5] =>  2025-09-20 10:32:19.158664 | orchestrator |  docker_version: 5:27.5.1 2025-09-20 10:32:19.158675 | orchestrator | 2025-09-20 10:32:19.158686 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-20 10:32:19.158696 | orchestrator | Saturday 20 September 2025 10:31:55 +0000 (0:00:00.230) 0:04:52.076 **** 2025-09-20 10:32:19.158707 | orchestrator | ok: [testbed-manager] =>  2025-09-20 10:32:19.158717 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158728 | orchestrator | ok: [testbed-node-0] =>  2025-09-20 10:32:19.158738 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158748 | orchestrator | ok: [testbed-node-1] =>  2025-09-20 10:32:19.158759 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158769 | orchestrator | ok: [testbed-node-2] =>  2025-09-20 10:32:19.158780 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158790 | orchestrator | ok: [testbed-node-3] =>  2025-09-20 10:32:19.158801 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158811 | orchestrator | ok: [testbed-node-4] =>  2025-09-20 10:32:19.158822 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158832 | orchestrator | ok: [testbed-node-5] =>  2025-09-20 10:32:19.158842 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-20 10:32:19.158853 | orchestrator | 2025-09-20 10:32:19.158863 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-20 10:32:19.158874 | orchestrator | Saturday 20 September 2025 10:31:55 +0000 (0:00:00.241) 0:04:52.318 **** 2025-09-20 10:32:19.158885 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:32:19.158895 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:32:19.158905 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:32:19.158916 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:32:19.158926 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:32:19.158937 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:32:19.158947 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:32:19.158958 | orchestrator | 2025-09-20 10:32:19.158968 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-20 10:32:19.158979 | orchestrator | Saturday 20 September 2025 10:31:56 +0000 (0:00:00.233) 0:04:52.551 **** 2025-09-20 10:32:19.158990 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:32:19.159009 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:32:19.159020 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:32:19.159031 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:32:19.159041 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:32:19.159052 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:32:19.159062 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:32:19.159090 | orchestrator | 2025-09-20 10:32:19.159101 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 
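The install include that follows (and the repository tasks logged right after it) prepares the upstream Docker apt source before any docker packages are touched: drop the old architecture-dependent source, fetch the signing key, register the repository, and refresh the package cache. A minimal hand-rolled equivalent, under the assumption that the usual download.docker.com URL and an /etc/apt/keyrings keyring path are used; the role's own templates and variable names are not visible in this log:

  # Illustrative sketch; URL, keyring path and filename are assumptions.
  - name: Ensure apt keyring directory exists
    ansible.builtin.file:
      path: /etc/apt/keyrings
      state: directory
      mode: "0755"

  - name: Add repository gpg key
    ansible.builtin.get_url:
      url: https://download.docker.com/linux/ubuntu/gpg
      dest: /etc/apt/keyrings/docker.asc
      mode: "0644"

  - name: Add repository
    ansible.builtin.apt_repository:
      repo: >-
        deb [signed-by=/etc/apt/keyrings/docker.asc]
        https://download.docker.com/linux/ubuntu
        {{ ansible_distribution_release }} stable
      filename: docker
      state: present

  - name: Update package cache
    ansible.builtin.apt:
      update_cache: true

This also explains the ok/changed split in the log that follows: the manager already had the source configured, so its tasks report ok, while the freshly provisioned nodes report changed.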
2025-09-20 10:32:19.159112 | orchestrator | Saturday 20 September 2025 10:31:56 +0000 (0:00:00.245) 0:04:52.797 **** 2025-09-20 10:32:19.159140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:32:19.159155 | orchestrator | 2025-09-20 10:32:19.159165 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-20 10:32:19.159176 | orchestrator | Saturday 20 September 2025 10:31:56 +0000 (0:00:00.409) 0:04:53.207 **** 2025-09-20 10:32:19.159187 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.159197 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:32:19.159208 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:32:19.159219 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:32:19.159229 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:32:19.159240 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:32:19.159250 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:32:19.159261 | orchestrator | 2025-09-20 10:32:19.159272 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-20 10:32:19.159283 | orchestrator | Saturday 20 September 2025 10:31:57 +0000 (0:00:00.734) 0:04:53.941 **** 2025-09-20 10:32:19.159293 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:32:19.159304 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:32:19.159314 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:32:19.159325 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:32:19.159335 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.159345 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:32:19.159356 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:32:19.159366 | orchestrator | 2025-09-20 10:32:19.159377 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-20 10:32:19.159388 | orchestrator | Saturday 20 September 2025 10:32:00 +0000 (0:00:03.155) 0:04:57.096 **** 2025-09-20 10:32:19.159399 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-20 10:32:19.159410 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-20 10:32:19.159421 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-20 10:32:19.159431 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-20 10:32:19.159459 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-20 10:32:19.159470 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-20 10:32:19.159481 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:32:19.159491 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-20 10:32:19.159502 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-20 10:32:19.159512 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:32:19.159523 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-20 10:32:19.159533 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-20 10:32:19.159549 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-20 10:32:19.159560 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-20 10:32:19.159571 | orchestrator | skipping: [testbed-node-1] 2025-09-20 
10:32:19.159581 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-20 10:32:19.159592 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-20 10:32:19.159602 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:32:19.159621 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-20 10:32:19.159632 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-20 10:32:19.159643 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-20 10:32:19.159654 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-20 10:32:19.159664 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:32:19.159675 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:32:19.159685 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-20 10:32:19.159696 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-20 10:32:19.159706 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-20 10:32:19.159717 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:32:19.159727 | orchestrator | 2025-09-20 10:32:19.159738 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-20 10:32:19.159749 | orchestrator | Saturday 20 September 2025 10:32:01 +0000 (0:00:00.546) 0:04:57.642 **** 2025-09-20 10:32:19.159760 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.159770 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:32:19.159780 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:32:19.159791 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:32:19.159802 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:32:19.159812 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:32:19.159822 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:32:19.159833 | orchestrator | 2025-09-20 10:32:19.159843 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-20 10:32:19.159854 | orchestrator | Saturday 20 September 2025 10:32:07 +0000 (0:00:06.079) 0:05:03.722 **** 2025-09-20 10:32:19.159865 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.159875 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:32:19.159886 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:32:19.159896 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:32:19.159907 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:32:19.159917 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:32:19.159928 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:32:19.159938 | orchestrator | 2025-09-20 10:32:19.159949 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-20 10:32:19.159960 | orchestrator | Saturday 20 September 2025 10:32:08 +0000 (0:00:01.137) 0:05:04.859 **** 2025-09-20 10:32:19.159970 | orchestrator | ok: [testbed-manager] 2025-09-20 10:32:19.159981 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:32:19.159991 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:32:19.160001 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:32:19.160012 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:32:19.160022 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:32:19.160033 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:32:19.160043 | orchestrator | 2025-09-20 10:32:19.160054 | orchestrator | TASK [osism.services.docker : 
Update package cache] **************************** 2025-09-20 10:32:19.160065 | orchestrator | Saturday 20 September 2025 10:32:15 +0000 (0:00:07.469) 0:05:12.328 **** 2025-09-20 10:32:19.160093 | orchestrator | changed: [testbed-manager] 2025-09-20 10:32:19.160104 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:32:19.160115 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:32:19.160132 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.296473 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.296601 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.296617 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.296630 | orchestrator | 2025-09-20 10:33:01.296642 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-20 10:33:01.296656 | orchestrator | Saturday 20 September 2025 10:32:19 +0000 (0:00:03.174) 0:05:15.502 **** 2025-09-20 10:33:01.296667 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.296679 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.296690 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.296723 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.296734 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.296745 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.296755 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.296766 | orchestrator | 2025-09-20 10:33:01.296777 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-20 10:33:01.296788 | orchestrator | Saturday 20 September 2025 10:32:20 +0000 (0:00:01.162) 0:05:16.665 **** 2025-09-20 10:33:01.296799 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.296809 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.296820 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.296830 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.296841 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.296851 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.296862 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.296872 | orchestrator | 2025-09-20 10:33:01.296883 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-20 10:33:01.296894 | orchestrator | Saturday 20 September 2025 10:32:21 +0000 (0:00:01.263) 0:05:17.928 **** 2025-09-20 10:33:01.296904 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:01.296915 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.296926 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.296936 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.296947 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.296957 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.296968 | orchestrator | changed: [testbed-manager] 2025-09-20 10:33:01.296978 | orchestrator | 2025-09-20 10:33:01.296989 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-20 10:33:01.297002 | orchestrator | Saturday 20 September 2025 10:32:22 +0000 (0:00:00.679) 0:05:18.607 **** 2025-09-20 10:33:01.297015 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.297029 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.297041 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.297095 | orchestrator | changed: [testbed-node-5] 
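The pin and lock tasks above are what keep every host on the tested 5:27.5.1 docker packages: an apt preferences entry pins docker-ce and docker-ce-cli, and containerd.io is put on hold once installed (the log shows it being unlocked first on the manager, where a hold was already present). A rough stand-alone sketch of that pattern, assuming a preferences file under /etc/apt/preferences.d and the upstream docker-ce / docker-ce-cli / containerd.io package names; the role's actual file names and templates are not shown in this log:

  # Illustrative only: pin the Docker packages, install containerd, then hold
  # it so a routine "apt upgrade" cannot move any of them.
  - name: Pin docker package version
    ansible.builtin.copy:
      dest: /etc/apt/preferences.d/docker-ce
      mode: "0644"
      content: |
        Package: docker-ce docker-ce-cli
        Pin: version 5:27.5.1*
        Pin-Priority: 1000

  - name: Install containerd package
    ansible.builtin.apt:
      name: containerd.io
      state: present

  - name: Lock containerd package
    ansible.builtin.dpkg_selections:
      name: containerd.io
      selection: hold

Together with the earlier removal of unattended-upgrades, this keeps the container runtime at exactly the version the job installed until a later run deliberately changes the pin.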
2025-09-20 10:33:01.297109 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.297121 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.297134 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.297146 | orchestrator | 2025-09-20 10:33:01.297158 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-20 10:33:01.297171 | orchestrator | Saturday 20 September 2025 10:32:31 +0000 (0:00:09.585) 0:05:28.192 **** 2025-09-20 10:33:01.297183 | orchestrator | changed: [testbed-manager] 2025-09-20 10:33:01.297196 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.297208 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.297221 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.297233 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.297245 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.297257 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.297269 | orchestrator | 2025-09-20 10:33:01.297281 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-20 10:33:01.297293 | orchestrator | Saturday 20 September 2025 10:32:32 +0000 (0:00:00.809) 0:05:29.002 **** 2025-09-20 10:33:01.297306 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.297318 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.297331 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.297343 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.297355 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.297366 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.297376 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.297387 | orchestrator | 2025-09-20 10:33:01.297398 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-20 10:33:01.297408 | orchestrator | Saturday 20 September 2025 10:32:41 +0000 (0:00:08.516) 0:05:37.518 **** 2025-09-20 10:33:01.297428 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.297439 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.297449 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.297460 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.297471 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.297481 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.297492 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.297502 | orchestrator | 2025-09-20 10:33:01.297513 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-20 10:33:01.297524 | orchestrator | Saturday 20 September 2025 10:32:51 +0000 (0:00:10.197) 0:05:47.715 **** 2025-09-20 10:33:01.297534 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-20 10:33:01.297546 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-20 10:33:01.297556 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-20 10:33:01.297567 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-20 10:33:01.297577 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-20 10:33:01.297588 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-20 10:33:01.297598 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-20 10:33:01.297609 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-20 
10:33:01.297620 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-20 10:33:01.297630 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-20 10:33:01.297641 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-20 10:33:01.297651 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-20 10:33:01.297662 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-20 10:33:01.297673 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-20 10:33:01.297684 | orchestrator | 2025-09-20 10:33:01.297694 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-20 10:33:01.297724 | orchestrator | Saturday 20 September 2025 10:32:52 +0000 (0:00:01.268) 0:05:48.983 **** 2025-09-20 10:33:01.297736 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:01.297747 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:01.297757 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.297768 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.297779 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.297789 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.297800 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.297811 | orchestrator | 2025-09-20 10:33:01.297821 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-20 10:33:01.297832 | orchestrator | Saturday 20 September 2025 10:32:53 +0000 (0:00:00.510) 0:05:49.494 **** 2025-09-20 10:33:01.297843 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.297854 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:01.297865 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:01.297875 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:01.297886 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:01.297897 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:01.297907 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:01.297918 | orchestrator | 2025-09-20 10:33:01.297929 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-20 10:33:01.297942 | orchestrator | Saturday 20 September 2025 10:32:56 +0000 (0:00:03.819) 0:05:53.313 **** 2025-09-20 10:33:01.297952 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:01.297963 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:01.297974 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.297984 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.297995 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.298006 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.298100 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.298124 | orchestrator | 2025-09-20 10:33:01.298136 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-20 10:33:01.298147 | orchestrator | Saturday 20 September 2025 10:32:57 +0000 (0:00:00.492) 0:05:53.806 **** 2025-09-20 10:33:01.298158 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-20 10:33:01.298169 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-20 10:33:01.298180 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:01.298191 | orchestrator | skipping: [testbed-node-0] => 
(item=python3-docker)  2025-09-20 10:33:01.298207 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-20 10:33:01.298219 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:01.298229 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-20 10:33:01.298240 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-20 10:33:01.298250 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.298261 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-20 10:33:01.298272 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-20 10:33:01.298282 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.298293 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-20 10:33:01.298303 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-20 10:33:01.298314 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.298324 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-20 10:33:01.298335 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-20 10:33:01.298346 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.298356 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-20 10:33:01.298367 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-20 10:33:01.298377 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.298388 | orchestrator | 2025-09-20 10:33:01.298399 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-20 10:33:01.298409 | orchestrator | Saturday 20 September 2025 10:32:58 +0000 (0:00:00.711) 0:05:54.517 **** 2025-09-20 10:33:01.298420 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:01.298431 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:01.298441 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.298452 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.298462 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.298473 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.298483 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.298494 | orchestrator | 2025-09-20 10:33:01.298505 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-20 10:33:01.298516 | orchestrator | Saturday 20 September 2025 10:32:58 +0000 (0:00:00.517) 0:05:55.034 **** 2025-09-20 10:33:01.298526 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:01.298537 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:01.298547 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.298558 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.298568 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.298579 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.298589 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.298600 | orchestrator | 2025-09-20 10:33:01.298611 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-20 10:33:01.298621 | orchestrator | Saturday 20 September 2025 10:32:59 +0000 (0:00:00.497) 0:05:55.532 **** 2025-09-20 10:33:01.298632 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:01.298643 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
10:33:01.298653 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:01.298664 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:01.298674 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:01.298691 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:01.298702 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:01.298713 | orchestrator | 2025-09-20 10:33:01.298723 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-20 10:33:01.298734 | orchestrator | Saturday 20 September 2025 10:32:59 +0000 (0:00:00.525) 0:05:56.057 **** 2025-09-20 10:33:01.298745 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:01.298764 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.003261 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.003388 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:23.003404 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:23.003416 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:23.003427 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.003438 | orchestrator | 2025-09-20 10:33:23.003451 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-20 10:33:23.003464 | orchestrator | Saturday 20 September 2025 10:33:01 +0000 (0:00:01.585) 0:05:57.643 **** 2025-09-20 10:33:23.003477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:33:23.003490 | orchestrator | 2025-09-20 10:33:23.003501 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-20 10:33:23.003512 | orchestrator | Saturday 20 September 2025 10:33:02 +0000 (0:00:01.016) 0:05:58.659 **** 2025-09-20 10:33:23.003523 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.003533 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.003545 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.003556 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.003567 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.003577 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.003588 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.003599 | orchestrator | 2025-09-20 10:33:23.003610 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-20 10:33:23.003621 | orchestrator | Saturday 20 September 2025 10:33:03 +0000 (0:00:00.806) 0:05:59.467 **** 2025-09-20 10:33:23.003632 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.003643 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.003653 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.003664 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.003676 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.003687 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.003698 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.003709 | orchestrator | 2025-09-20 10:33:23.003720 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-20 10:33:23.003730 | orchestrator | Saturday 20 September 2025 10:33:03 +0000 (0:00:00.803) 0:06:00.270 **** 2025-09-20 10:33:23.003741 | orchestrator | ok: [testbed-manager] 2025-09-20 
10:33:23.003752 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.003779 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.003793 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.003805 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.003817 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.003830 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.003842 | orchestrator | 2025-09-20 10:33:23.003855 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-20 10:33:23.003869 | orchestrator | Saturday 20 September 2025 10:33:05 +0000 (0:00:01.273) 0:06:01.544 **** 2025-09-20 10:33:23.003881 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:23.003894 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.003906 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.003918 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.003931 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:23.003943 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:23.003976 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:23.003988 | orchestrator | 2025-09-20 10:33:23.004020 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-20 10:33:23.004033 | orchestrator | Saturday 20 September 2025 10:33:06 +0000 (0:00:01.483) 0:06:03.027 **** 2025-09-20 10:33:23.004045 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.004056 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.004067 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.004078 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.004088 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.004099 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.004110 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.004120 | orchestrator | 2025-09-20 10:33:23.004131 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-20 10:33:23.004142 | orchestrator | Saturday 20 September 2025 10:33:07 +0000 (0:00:01.298) 0:06:04.325 **** 2025-09-20 10:33:23.004153 | orchestrator | changed: [testbed-manager] 2025-09-20 10:33:23.004163 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.004174 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.004184 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.004195 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.004205 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.004216 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.004227 | orchestrator | 2025-09-20 10:33:23.004238 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-20 10:33:23.004283 | orchestrator | Saturday 20 September 2025 10:33:09 +0000 (0:00:01.354) 0:06:05.680 **** 2025-09-20 10:33:23.004306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:33:23.004318 | orchestrator | 2025-09-20 10:33:23.004329 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-20 10:33:23.004340 | orchestrator | Saturday 20 September 2025 10:33:10 +0000 (0:00:01.038) 0:06:06.718 
**** 2025-09-20 10:33:23.004350 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.004361 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.004372 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.004383 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.004394 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:23.004404 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:23.004415 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:23.004425 | orchestrator | 2025-09-20 10:33:23.004436 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-20 10:33:23.004447 | orchestrator | Saturday 20 September 2025 10:33:11 +0000 (0:00:01.279) 0:06:07.998 **** 2025-09-20 10:33:23.004458 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.004468 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.004498 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.004509 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.004520 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:23.004531 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:23.004541 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:23.004552 | orchestrator | 2025-09-20 10:33:23.004562 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-20 10:33:23.004573 | orchestrator | Saturday 20 September 2025 10:33:13 +0000 (0:00:02.109) 0:06:10.108 **** 2025-09-20 10:33:23.004584 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.004595 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.004605 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.004616 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.004626 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:23.004637 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:23.004647 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:23.004658 | orchestrator | 2025-09-20 10:33:23.004668 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-20 10:33:23.004688 | orchestrator | Saturday 20 September 2025 10:33:14 +0000 (0:00:01.034) 0:06:11.142 **** 2025-09-20 10:33:23.004699 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.004710 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.004720 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.004731 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.004742 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:23.004752 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:23.004763 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:23.004774 | orchestrator | 2025-09-20 10:33:23.004785 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-20 10:33:23.004795 | orchestrator | Saturday 20 September 2025 10:33:15 +0000 (0:00:01.026) 0:06:12.169 **** 2025-09-20 10:33:23.004806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:33:23.004817 | orchestrator | 2025-09-20 10:33:23.004828 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.004838 | orchestrator | Saturday 20 September 2025 10:33:16 +0000 (0:00:01.083) 0:06:13.253 **** 2025-09-20 10:33:23.004849 
| orchestrator | 2025-09-20 10:33:23.004860 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.004870 | orchestrator | Saturday 20 September 2025 10:33:16 +0000 (0:00:00.040) 0:06:13.293 **** 2025-09-20 10:33:23.004881 | orchestrator | 2025-09-20 10:33:23.004892 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.004903 | orchestrator | Saturday 20 September 2025 10:33:16 +0000 (0:00:00.039) 0:06:13.333 **** 2025-09-20 10:33:23.004913 | orchestrator | 2025-09-20 10:33:23.004924 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.004935 | orchestrator | Saturday 20 September 2025 10:33:17 +0000 (0:00:00.047) 0:06:13.380 **** 2025-09-20 10:33:23.004945 | orchestrator | 2025-09-20 10:33:23.004956 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.004966 | orchestrator | Saturday 20 September 2025 10:33:17 +0000 (0:00:00.039) 0:06:13.420 **** 2025-09-20 10:33:23.004977 | orchestrator | 2025-09-20 10:33:23.004988 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.005028 | orchestrator | Saturday 20 September 2025 10:33:17 +0000 (0:00:00.038) 0:06:13.458 **** 2025-09-20 10:33:23.005040 | orchestrator | 2025-09-20 10:33:23.005051 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-20 10:33:23.005062 | orchestrator | Saturday 20 September 2025 10:33:17 +0000 (0:00:00.046) 0:06:13.505 **** 2025-09-20 10:33:23.005072 | orchestrator | 2025-09-20 10:33:23.005083 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-20 10:33:23.005094 | orchestrator | Saturday 20 September 2025 10:33:17 +0000 (0:00:00.039) 0:06:13.545 **** 2025-09-20 10:33:23.005104 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:23.005115 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:23.005126 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:23.005136 | orchestrator | 2025-09-20 10:33:23.005147 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-20 10:33:23.005158 | orchestrator | Saturday 20 September 2025 10:33:18 +0000 (0:00:01.162) 0:06:14.707 **** 2025-09-20 10:33:23.005168 | orchestrator | changed: [testbed-manager] 2025-09-20 10:33:23.005179 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.005190 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.005200 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.005211 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.005221 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.005232 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.005242 | orchestrator | 2025-09-20 10:33:23.005253 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-20 10:33:23.005270 | orchestrator | Saturday 20 September 2025 10:33:19 +0000 (0:00:01.237) 0:06:15.944 **** 2025-09-20 10:33:23.005281 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:23.005292 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.005302 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.005313 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.005323 | 
orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:23.005334 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:23.005344 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:23.005355 | orchestrator | 2025-09-20 10:33:23.005366 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-20 10:33:23.005376 | orchestrator | Saturday 20 September 2025 10:33:21 +0000 (0:00:02.409) 0:06:18.354 **** 2025-09-20 10:33:23.005387 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:23.005397 | orchestrator | 2025-09-20 10:33:23.005416 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-20 10:33:23.005427 | orchestrator | Saturday 20 September 2025 10:33:22 +0000 (0:00:00.108) 0:06:18.463 **** 2025-09-20 10:33:23.005438 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:23.005449 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:23.005459 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:23.005470 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:23.005487 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:47.963894 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:47.964069 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:47.964085 | orchestrator | 2025-09-20 10:33:47.964098 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-20 10:33:47.964110 | orchestrator | Saturday 20 September 2025 10:33:22 +0000 (0:00:00.888) 0:06:19.351 **** 2025-09-20 10:33:47.964123 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:47.964134 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:47.964145 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:47.964156 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:47.964166 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:47.964177 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:47.964188 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:47.964198 | orchestrator | 2025-09-20 10:33:47.964209 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-20 10:33:47.964220 | orchestrator | Saturday 20 September 2025 10:33:23 +0000 (0:00:00.469) 0:06:19.820 **** 2025-09-20 10:33:47.964232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:33:47.964246 | orchestrator | 2025-09-20 10:33:47.964256 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-20 10:33:47.964267 | orchestrator | Saturday 20 September 2025 10:33:24 +0000 (0:00:00.901) 0:06:20.722 **** 2025-09-20 10:33:47.964278 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.964291 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:47.964302 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:47.964312 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:47.964323 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:47.964333 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:47.964344 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:47.964355 | orchestrator | 2025-09-20 10:33:47.964365 | orchestrator | TASK [osism.services.docker : Copy docker fact files] 
************************** 2025-09-20 10:33:47.964376 | orchestrator | Saturday 20 September 2025 10:33:25 +0000 (0:00:00.739) 0:06:21.462 **** 2025-09-20 10:33:47.964387 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-20 10:33:47.964398 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-20 10:33:47.964409 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-20 10:33:47.964462 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-20 10:33:47.964476 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-20 10:33:47.964488 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-20 10:33:47.964500 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-20 10:33:47.964512 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-20 10:33:47.964524 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-20 10:33:47.964536 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-20 10:33:47.964548 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-20 10:33:47.964560 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-20 10:33:47.964572 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-20 10:33:47.964585 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-20 10:33:47.964597 | orchestrator | 2025-09-20 10:33:47.964610 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-20 10:33:47.964622 | orchestrator | Saturday 20 September 2025 10:33:27 +0000 (0:00:02.378) 0:06:23.840 **** 2025-09-20 10:33:47.964634 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:47.964646 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:47.964658 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:47.964670 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:47.964682 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:47.964694 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:47.964706 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:47.964717 | orchestrator | 2025-09-20 10:33:47.964729 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-20 10:33:47.964741 | orchestrator | Saturday 20 September 2025 10:33:28 +0000 (0:00:00.544) 0:06:24.385 **** 2025-09-20 10:33:47.964756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:33:47.964771 | orchestrator | 2025-09-20 10:33:47.964782 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-20 10:33:47.964793 | orchestrator | Saturday 20 September 2025 10:33:29 +0000 (0:00:01.004) 0:06:25.389 **** 2025-09-20 10:33:47.964803 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.964814 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:47.964825 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:47.964835 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:47.964845 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:47.964856 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:47.964866 | 
orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:47.964876 | orchestrator | 2025-09-20 10:33:47.964887 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-20 10:33:47.964897 | orchestrator | Saturday 20 September 2025 10:33:29 +0000 (0:00:00.845) 0:06:26.235 **** 2025-09-20 10:33:47.964908 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.964919 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:47.964953 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:47.964964 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:47.964974 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:47.964984 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:47.964995 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:47.965005 | orchestrator | 2025-09-20 10:33:47.965016 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-20 10:33:47.965043 | orchestrator | Saturday 20 September 2025 10:33:30 +0000 (0:00:00.833) 0:06:27.069 **** 2025-09-20 10:33:47.965055 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:47.965065 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:47.965076 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:47.965086 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:47.965108 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:47.965119 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:47.965130 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:47.965140 | orchestrator | 2025-09-20 10:33:47.965151 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-20 10:33:47.965161 | orchestrator | Saturday 20 September 2025 10:33:31 +0000 (0:00:00.510) 0:06:27.579 **** 2025-09-20 10:33:47.965172 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:47.965183 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.965193 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:47.965204 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:47.965214 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:47.965224 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:47.965235 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:47.965245 | orchestrator | 2025-09-20 10:33:47.965256 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-20 10:33:47.965266 | orchestrator | Saturday 20 September 2025 10:33:32 +0000 (0:00:01.535) 0:06:29.115 **** 2025-09-20 10:33:47.965277 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:47.965288 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:47.965298 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:47.965309 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:47.965319 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:47.965329 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:47.965340 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:47.965350 | orchestrator | 2025-09-20 10:33:47.965361 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-20 10:33:47.965372 | orchestrator | Saturday 20 September 2025 10:33:33 +0000 (0:00:00.472) 0:06:29.587 **** 2025-09-20 10:33:47.965382 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.965393 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:47.965403 | orchestrator | 
changed: [testbed-node-3] 2025-09-20 10:33:47.965414 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:47.965424 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:47.965435 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:47.965445 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:47.965456 | orchestrator | 2025-09-20 10:33:47.965466 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-20 10:33:47.965482 | orchestrator | Saturday 20 September 2025 10:33:40 +0000 (0:00:07.650) 0:06:37.238 **** 2025-09-20 10:33:47.965493 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.965504 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:47.965515 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:47.965525 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:47.965535 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:47.965546 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:47.965556 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:47.965567 | orchestrator | 2025-09-20 10:33:47.965578 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-20 10:33:47.965588 | orchestrator | Saturday 20 September 2025 10:33:42 +0000 (0:00:01.319) 0:06:38.558 **** 2025-09-20 10:33:47.965599 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.965609 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:47.965620 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:47.965630 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:47.965641 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:47.965652 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:47.965662 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:47.965673 | orchestrator | 2025-09-20 10:33:47.965683 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-20 10:33:47.965694 | orchestrator | Saturday 20 September 2025 10:33:43 +0000 (0:00:01.695) 0:06:40.253 **** 2025-09-20 10:33:47.965705 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.965722 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:33:47.965733 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:33:47.965743 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:33:47.965754 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:33:47.965764 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:33:47.965775 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:33:47.965785 | orchestrator | 2025-09-20 10:33:47.965796 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 10:33:47.965807 | orchestrator | Saturday 20 September 2025 10:33:45 +0000 (0:00:01.878) 0:06:42.132 **** 2025-09-20 10:33:47.965817 | orchestrator | ok: [testbed-manager] 2025-09-20 10:33:47.965828 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:33:47.965838 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:33:47.965849 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:33:47.965860 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:33:47.965870 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:33:47.965880 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:33:47.965891 | orchestrator | 2025-09-20 10:33:47.965901 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 
10:33:47.965912 | orchestrator | Saturday 20 September 2025 10:33:46 +0000 (0:00:00.847) 0:06:42.979 **** 2025-09-20 10:33:47.965939 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:47.965951 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:47.965961 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:47.965972 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:47.965982 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:47.965993 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:47.966003 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:47.966078 | orchestrator | 2025-09-20 10:33:47.966093 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-20 10:33:47.966104 | orchestrator | Saturday 20 September 2025 10:33:47 +0000 (0:00:00.879) 0:06:43.859 **** 2025-09-20 10:33:47.966114 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:33:47.966125 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:33:47.966136 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:33:47.966146 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:33:47.966156 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:33:47.966167 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:33:47.966178 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:33:47.966188 | orchestrator | 2025-09-20 10:33:47.966207 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-20 10:34:19.023270 | orchestrator | Saturday 20 September 2025 10:33:47 +0000 (0:00:00.449) 0:06:44.309 **** 2025-09-20 10:34:19.023385 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.023402 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.023413 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.023424 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.023435 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.023446 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.023458 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.023469 | orchestrator | 2025-09-20 10:34:19.023481 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-20 10:34:19.023493 | orchestrator | Saturday 20 September 2025 10:33:48 +0000 (0:00:00.497) 0:06:44.807 **** 2025-09-20 10:34:19.023504 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.023515 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.023526 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.023536 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.023547 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.023558 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.023568 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.023579 | orchestrator | 2025-09-20 10:34:19.023590 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-20 10:34:19.023601 | orchestrator | Saturday 20 September 2025 10:33:48 +0000 (0:00:00.435) 0:06:45.243 **** 2025-09-20 10:34:19.023638 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.023649 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.023660 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.023670 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.023681 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.023691 | orchestrator | ok: [testbed-node-4] 
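The tasks above finish the container-runtime part of the bootstrap: the docker-compose-plugin package is installed, an osism.target systemd unit is copied and enabled on every host, and a custom facts directory is prepared before the chrony role takes over. A minimal sketch of how this state could be spot-checked on one of the testbed nodes (assumes shell access to the node; the unit and group layout come from the osism roles, not from anything beyond this log):

    # Compose v2 is provided as a Docker CLI plugin by docker-compose-plugin
    docker compose version
    # The osism.target unit copied above should be present and enabled
    systemctl is-enabled osism.target
    # The deployment user was added to the docker group earlier in the play
    getent group docker
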
2025-09-20 10:34:19.023702 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.023712 | orchestrator | 2025-09-20 10:34:19.023723 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-20 10:34:19.023734 | orchestrator | Saturday 20 September 2025 10:33:49 +0000 (0:00:00.453) 0:06:45.696 **** 2025-09-20 10:34:19.023745 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.023755 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.023802 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.023815 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.023828 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.023840 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.023853 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.023865 | orchestrator | 2025-09-20 10:34:19.023878 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-20 10:34:19.023903 | orchestrator | Saturday 20 September 2025 10:33:54 +0000 (0:00:05.393) 0:06:51.090 **** 2025-09-20 10:34:19.023917 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:34:19.023931 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:34:19.023943 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:34:19.023956 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:34:19.023968 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:34:19.023981 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:34:19.023994 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:34:19.024006 | orchestrator | 2025-09-20 10:34:19.024018 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-20 10:34:19.024031 | orchestrator | Saturday 20 September 2025 10:33:55 +0000 (0:00:00.549) 0:06:51.639 **** 2025-09-20 10:34:19.024045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:34:19.024062 | orchestrator | 2025-09-20 10:34:19.024074 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-20 10:34:19.024087 | orchestrator | Saturday 20 September 2025 10:33:56 +0000 (0:00:00.798) 0:06:52.438 **** 2025-09-20 10:34:19.024099 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.024111 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.024124 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.024136 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.024149 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.024160 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.024171 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.024182 | orchestrator | 2025-09-20 10:34:19.024193 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-20 10:34:19.024203 | orchestrator | Saturday 20 September 2025 10:33:58 +0000 (0:00:01.956) 0:06:54.395 **** 2025-09-20 10:34:19.024214 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.024225 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.024236 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.024246 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.024256 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.024267 | orchestrator 
| ok: [testbed-node-4] 2025-09-20 10:34:19.024277 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.024288 | orchestrator | 2025-09-20 10:34:19.024299 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-20 10:34:19.024310 | orchestrator | Saturday 20 September 2025 10:33:59 +0000 (0:00:01.145) 0:06:55.540 **** 2025-09-20 10:34:19.024320 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.024331 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.024350 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.024361 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.024371 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.024382 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.024392 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.024403 | orchestrator | 2025-09-20 10:34:19.024414 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-20 10:34:19.024425 | orchestrator | Saturday 20 September 2025 10:34:00 +0000 (0:00:00.925) 0:06:56.465 **** 2025-09-20 10:34:19.024436 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024448 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024459 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024486 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024498 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024508 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024519 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 10:34:19.024530 | orchestrator | 2025-09-20 10:34:19.024541 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-20 10:34:19.024552 | orchestrator | Saturday 20 September 2025 10:34:01 +0000 (0:00:01.646) 0:06:58.112 **** 2025-09-20 10:34:19.024563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:34:19.024574 | orchestrator | 2025-09-20 10:34:19.024585 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-20 10:34:19.024596 | orchestrator | Saturday 20 September 2025 10:34:02 +0000 (0:00:01.024) 0:06:59.137 **** 2025-09-20 10:34:19.024606 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:19.024617 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:19.024628 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:19.024638 | orchestrator | changed: [testbed-node-5] 2025-09-20 
10:34:19.024649 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:19.024659 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:19.024670 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:19.024681 | orchestrator | 2025-09-20 10:34:19.024691 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-20 10:34:19.024702 | orchestrator | Saturday 20 September 2025 10:34:11 +0000 (0:00:08.814) 0:07:07.952 **** 2025-09-20 10:34:19.024713 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.024728 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.024739 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.024750 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.024760 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.024787 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.024798 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.024809 | orchestrator | 2025-09-20 10:34:19.024820 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-20 10:34:19.024831 | orchestrator | Saturday 20 September 2025 10:34:13 +0000 (0:00:01.871) 0:07:09.823 **** 2025-09-20 10:34:19.024842 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.024852 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.024870 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.024881 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.024891 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.024902 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.024912 | orchestrator | 2025-09-20 10:34:19.024923 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-20 10:34:19.024934 | orchestrator | Saturday 20 September 2025 10:34:14 +0000 (0:00:01.257) 0:07:11.081 **** 2025-09-20 10:34:19.024945 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:19.024956 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:19.024967 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:19.024978 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:19.024989 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:19.025000 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:19.025010 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:19.025021 | orchestrator | 2025-09-20 10:34:19.025032 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-20 10:34:19.025043 | orchestrator | 2025-09-20 10:34:19.025054 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-20 10:34:19.025064 | orchestrator | Saturday 20 September 2025 10:34:15 +0000 (0:00:01.171) 0:07:12.253 **** 2025-09-20 10:34:19.025075 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:34:19.025086 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:34:19.025097 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:34:19.025108 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:34:19.025118 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:34:19.025129 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:34:19.025140 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:34:19.025151 | orchestrator | 2025-09-20 10:34:19.025161 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-20 10:34:19.025172 | 
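With chrony installed, its configuration rendered from chrony.conf.j2, and the restart handler fired, time synchronisation can be checked directly on any node; lldpd, installed just before, exposes discovered neighbors through its own CLI. A short verification sketch (standard chrony/lldpd tooling, assuming the services are running; neighbor output may be empty on virtualised links):

    # Show the currently selected time source and offset
    chronyc tracking
    # List all NTP sources from the rendered chrony.conf
    chronyc sources -v
    # lldpd neighbor table (populated only if peers send LLDP frames)
    lldpcli show neighbors
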
orchestrator | 2025-09-20 10:34:19.025183 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-20 10:34:19.025194 | orchestrator | Saturday 20 September 2025 10:34:16 +0000 (0:00:00.448) 0:07:12.702 **** 2025-09-20 10:34:19.025205 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:19.025215 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:19.025226 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:19.025237 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:19.025247 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:19.025258 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:19.025269 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:19.025279 | orchestrator | 2025-09-20 10:34:19.025290 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-20 10:34:19.025301 | orchestrator | Saturday 20 September 2025 10:34:17 +0000 (0:00:01.184) 0:07:13.886 **** 2025-09-20 10:34:19.025312 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:19.025323 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:19.025333 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:19.025344 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:19.025355 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:19.025366 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:19.025376 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:19.025387 | orchestrator | 2025-09-20 10:34:19.025398 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-20 10:34:19.025416 | orchestrator | Saturday 20 September 2025 10:34:19 +0000 (0:00:01.478) 0:07:15.365 **** 2025-09-20 10:34:42.115760 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:34:42.115880 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:34:42.115896 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:34:42.115908 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:34:42.115919 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:34:42.115930 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:34:42.115941 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:34:42.115952 | orchestrator | 2025-09-20 10:34:42.115989 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-20 10:34:42.116002 | orchestrator | Saturday 20 September 2025 10:34:19 +0000 (0:00:00.416) 0:07:15.781 **** 2025-09-20 10:34:42.116014 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:34:42.116026 | orchestrator | 2025-09-20 10:34:42.116037 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-20 10:34:42.116047 | orchestrator | Saturday 20 September 2025 10:34:20 +0000 (0:00:00.788) 0:07:16.570 **** 2025-09-20 10:34:42.116059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:34:42.116073 | orchestrator | 2025-09-20 10:34:42.116084 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-20 10:34:42.116095 | orchestrator | Saturday 20 September 
2025 10:34:20 +0000 (0:00:00.702) 0:07:17.273 **** 2025-09-20 10:34:42.116105 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.116116 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.116127 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.116137 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.116147 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.116158 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.116168 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.116179 | orchestrator | 2025-09-20 10:34:42.116189 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-20 10:34:42.116201 | orchestrator | Saturday 20 September 2025 10:34:29 +0000 (0:00:08.100) 0:07:25.374 **** 2025-09-20 10:34:42.116211 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.116222 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.116233 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.116245 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.116257 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.116268 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.116281 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.116292 | orchestrator | 2025-09-20 10:34:42.116304 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-20 10:34:42.116316 | orchestrator | Saturday 20 September 2025 10:34:29 +0000 (0:00:00.835) 0:07:26.209 **** 2025-09-20 10:34:42.116328 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.116340 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.116351 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.116364 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.116376 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.116388 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.116400 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.116412 | orchestrator | 2025-09-20 10:34:42.116424 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-20 10:34:42.116437 | orchestrator | Saturday 20 September 2025 10:34:31 +0000 (0:00:01.452) 0:07:27.661 **** 2025-09-20 10:34:42.116448 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.116460 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.116472 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.116484 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.116496 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.116508 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.116520 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.116532 | orchestrator | 2025-09-20 10:34:42.116544 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-20 10:34:42.116556 | orchestrator | Saturday 20 September 2025 10:34:32 +0000 (0:00:01.623) 0:07:29.285 **** 2025-09-20 10:34:42.116568 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.116587 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.116598 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.116608 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.116619 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.116654 | orchestrator | 
changed: [testbed-node-4] 2025-09-20 10:34:42.116665 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.116676 | orchestrator | 2025-09-20 10:34:42.116687 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-20 10:34:42.116697 | orchestrator | Saturday 20 September 2025 10:34:34 +0000 (0:00:01.106) 0:07:30.391 **** 2025-09-20 10:34:42.116708 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.116719 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.116729 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.116740 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.116750 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.116761 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.116771 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.116782 | orchestrator | 2025-09-20 10:34:42.116793 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-20 10:34:42.116804 | orchestrator | 2025-09-20 10:34:42.116814 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-20 10:34:42.116825 | orchestrator | Saturday 20 September 2025 10:34:36 +0000 (0:00:02.161) 0:07:32.552 **** 2025-09-20 10:34:42.116836 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:34:42.116847 | orchestrator | 2025-09-20 10:34:42.116858 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-20 10:34:42.116886 | orchestrator | Saturday 20 September 2025 10:34:37 +0000 (0:00:00.849) 0:07:33.402 **** 2025-09-20 10:34:42.116897 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:42.116909 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:42.116920 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:42.116931 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:42.116941 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:42.116952 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:42.116962 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:42.116973 | orchestrator | 2025-09-20 10:34:42.116984 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-20 10:34:42.116995 | orchestrator | Saturday 20 September 2025 10:34:37 +0000 (0:00:00.804) 0:07:34.206 **** 2025-09-20 10:34:42.117005 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.117064 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.117076 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.117087 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.117097 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.117108 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.117118 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.117129 | orchestrator | 2025-09-20 10:34:42.117140 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-20 10:34:42.117150 | orchestrator | Saturday 20 September 2025 10:34:39 +0000 (0:00:01.273) 0:07:35.479 **** 2025-09-20 10:34:42.117161 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:34:42.117172 | orchestrator 
| 2025-09-20 10:34:42.117183 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-20 10:34:42.117193 | orchestrator | Saturday 20 September 2025 10:34:39 +0000 (0:00:00.835) 0:07:36.314 **** 2025-09-20 10:34:42.117204 | orchestrator | ok: [testbed-manager] 2025-09-20 10:34:42.117214 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:34:42.117225 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:34:42.117236 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:34:42.117246 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:34:42.117264 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:34:42.117275 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:34:42.117286 | orchestrator | 2025-09-20 10:34:42.117296 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-20 10:34:42.117307 | orchestrator | Saturday 20 September 2025 10:34:40 +0000 (0:00:00.835) 0:07:37.150 **** 2025-09-20 10:34:42.117322 | orchestrator | changed: [testbed-manager] 2025-09-20 10:34:42.117333 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:34:42.117344 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:34:42.117355 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:34:42.117365 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:34:42.117376 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:34:42.117386 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:34:42.117397 | orchestrator | 2025-09-20 10:34:42.117407 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:34:42.117419 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-20 10:34:42.117431 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-20 10:34:42.117442 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-20 10:34:42.117453 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-20 10:34:42.117463 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-20 10:34:42.117474 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-20 10:34:42.117485 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-20 10:34:42.117495 | orchestrator | 2025-09-20 10:34:42.117506 | orchestrator | 2025-09-20 10:34:42.117517 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:34:42.117528 | orchestrator | Saturday 20 September 2025 10:34:42 +0000 (0:00:01.298) 0:07:38.449 **** 2025-09-20 10:34:42.117539 | orchestrator | =============================================================================== 2025-09-20 10:34:42.117549 | orchestrator | osism.commons.packages : Install required packages --------------------- 72.70s 2025-09-20 10:34:42.117560 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.60s 2025-09-20 10:34:42.117571 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.25s 2025-09-20 10:34:42.117582 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.91s 
2025-09-20 10:34:42.117592 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.92s 2025-09-20 10:34:42.117603 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.67s 2025-09-20 10:34:42.117615 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.20s 2025-09-20 10:34:42.117642 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.59s 2025-09-20 10:34:42.117653 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.81s 2025-09-20 10:34:42.117663 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.52s 2025-09-20 10:34:42.117681 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.10s 2025-09-20 10:34:42.386306 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.66s 2025-09-20 10:34:42.386390 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.65s 2025-09-20 10:34:42.386422 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.47s 2025-09-20 10:34:42.386432 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.45s 2025-09-20 10:34:42.386441 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.20s 2025-09-20 10:34:42.386451 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.08s 2025-09-20 10:34:42.386460 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.69s 2025-09-20 10:34:42.386470 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.60s 2025-09-20 10:34:42.386480 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.50s 2025-09-20 10:34:42.592432 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-20 10:34:42.592517 | orchestrator | + osism apply network 2025-09-20 10:34:55.035684 | orchestrator | 2025-09-20 10:34:55 | INFO  | Task 47a79817-9781-4c51-b022-4c1da4b50c44 (network) was prepared for execution. 2025-09-20 10:34:55.035806 | orchestrator | 2025-09-20 10:34:55 | INFO  | It takes a moment until task 47a79817-9781-4c51-b022-4c1da4b50c44 (network) has been started and output is visible here. 
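The PLAY RECAP above confirms the bootstrap run ended with failed=0 and unreachable=0 on all seven hosts, and the TASKS RECAP lists the slowest tasks, dominated by package installation. The osism.commons.state tasks persisted the bootstrap status as local facts before the job handed over to `osism apply network`. A small sketch of how that state and the follow-up apply could be inspected from the manager (assumes the testbed inventory is available there; the ansible_local filter is standard Ansible, and the exact fact file layout is not shown in this log):

    # Bootstrap state written by osism.commons.state appears under ansible_local
    ansible testbed-manager -m setup -a 'filter=ansible_local'
    # Further roles are queued through the osism CLI, exactly as the job does next
    osism apply network
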
2025-09-20 10:35:22.054351 | orchestrator | 2025-09-20 10:35:22.054473 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-20 10:35:22.054490 | orchestrator | 2025-09-20 10:35:22.054503 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-20 10:35:22.054516 | orchestrator | Saturday 20 September 2025 10:34:59 +0000 (0:00:00.297) 0:00:00.297 **** 2025-09-20 10:35:22.054527 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.054540 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.054552 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.054564 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.054575 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.054586 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.054667 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.054679 | orchestrator | 2025-09-20 10:35:22.054691 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-20 10:35:22.054702 | orchestrator | Saturday 20 September 2025 10:34:59 +0000 (0:00:00.659) 0:00:00.957 **** 2025-09-20 10:35:22.054715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:35:22.054729 | orchestrator | 2025-09-20 10:35:22.054740 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-20 10:35:22.054751 | orchestrator | Saturday 20 September 2025 10:35:00 +0000 (0:00:01.092) 0:00:02.049 **** 2025-09-20 10:35:22.054762 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.054773 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.054784 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.054795 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.054806 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.054816 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.054827 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.054840 | orchestrator | 2025-09-20 10:35:22.054853 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-20 10:35:22.054866 | orchestrator | Saturday 20 September 2025 10:35:02 +0000 (0:00:01.844) 0:00:03.894 **** 2025-09-20 10:35:22.054879 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.054892 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.054905 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.054917 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.054929 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.054942 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.054954 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.054967 | orchestrator | 2025-09-20 10:35:22.054979 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-20 10:35:22.055015 | orchestrator | Saturday 20 September 2025 10:35:04 +0000 (0:00:01.562) 0:00:05.457 **** 2025-09-20 10:35:22.055028 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-20 10:35:22.055041 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-20 10:35:22.055054 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-20 10:35:22.055067 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-20 10:35:22.055080 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-20 10:35:22.055092 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-20 10:35:22.055105 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-20 10:35:22.055117 | orchestrator | 2025-09-20 10:35:22.055129 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-20 10:35:22.055142 | orchestrator | Saturday 20 September 2025 10:35:05 +0000 (0:00:00.897) 0:00:06.354 **** 2025-09-20 10:35:22.055155 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 10:35:22.055169 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-20 10:35:22.055181 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:35:22.055192 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-20 10:35:22.055203 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 10:35:22.055214 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 10:35:22.055224 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 10:35:22.055235 | orchestrator | 2025-09-20 10:35:22.055246 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-20 10:35:22.055257 | orchestrator | Saturday 20 September 2025 10:35:08 +0000 (0:00:03.239) 0:00:09.594 **** 2025-09-20 10:35:22.055268 | orchestrator | changed: [testbed-manager] 2025-09-20 10:35:22.055279 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:35:22.055290 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:35:22.055300 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:35:22.055311 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:35:22.055321 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:35:22.055332 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:35:22.055343 | orchestrator | 2025-09-20 10:35:22.055354 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-20 10:35:22.055365 | orchestrator | Saturday 20 September 2025 10:35:09 +0000 (0:00:01.365) 0:00:10.960 **** 2025-09-20 10:35:22.055376 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:35:22.055386 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-20 10:35:22.055397 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-20 10:35:22.055408 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 10:35:22.055418 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 10:35:22.055429 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 10:35:22.055440 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 10:35:22.055450 | orchestrator | 2025-09-20 10:35:22.055461 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-20 10:35:22.055472 | orchestrator | Saturday 20 September 2025 10:35:11 +0000 (0:00:01.872) 0:00:12.832 **** 2025-09-20 10:35:22.055483 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.055494 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.055505 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.055515 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.055526 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.055537 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.055547 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.055558 | orchestrator | 2025-09-20 
10:35:22.055569 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-20 10:35:22.055618 | orchestrator | Saturday 20 September 2025 10:35:12 +0000 (0:00:01.051) 0:00:13.884 **** 2025-09-20 10:35:22.055631 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:35:22.055642 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:35:22.055653 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:35:22.055673 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:35:22.055684 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:35:22.055695 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:35:22.055706 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:35:22.055718 | orchestrator | 2025-09-20 10:35:22.055729 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-20 10:35:22.055755 | orchestrator | Saturday 20 September 2025 10:35:13 +0000 (0:00:00.639) 0:00:14.524 **** 2025-09-20 10:35:22.055766 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.055777 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.055788 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.055799 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.055810 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.055821 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.055832 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.055843 | orchestrator | 2025-09-20 10:35:22.055854 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-20 10:35:22.055865 | orchestrator | Saturday 20 September 2025 10:35:15 +0000 (0:00:02.018) 0:00:16.542 **** 2025-09-20 10:35:22.055876 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:35:22.055887 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:35:22.055898 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:35:22.055909 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:35:22.055920 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:35:22.055931 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:35:22.055943 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-20 10:35:22.055956 | orchestrator | 2025-09-20 10:35:22.055967 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-20 10:35:22.055978 | orchestrator | Saturday 20 September 2025 10:35:16 +0000 (0:00:00.908) 0:00:17.450 **** 2025-09-20 10:35:22.055989 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.056000 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:35:22.056011 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:35:22.056022 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:35:22.056033 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:35:22.056044 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:35:22.056055 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:35:22.056066 | orchestrator | 2025-09-20 10:35:22.056077 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-20 10:35:22.056088 | orchestrator | Saturday 20 September 2025 10:35:17 +0000 (0:00:01.613) 0:00:19.063 **** 2025-09-20 10:35:22.056100 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:35:22.056112 | orchestrator | 2025-09-20 10:35:22.056123 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-20 10:35:22.056134 | orchestrator | Saturday 20 September 2025 10:35:19 +0000 (0:00:01.364) 0:00:20.428 **** 2025-09-20 10:35:22.056145 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.056156 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.056167 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.056178 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.056188 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.056199 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.056210 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.056221 | orchestrator | 2025-09-20 10:35:22.056232 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-20 10:35:22.056243 | orchestrator | Saturday 20 September 2025 10:35:20 +0000 (0:00:00.942) 0:00:21.370 **** 2025-09-20 10:35:22.056254 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:22.056265 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:22.056275 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:22.056292 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:22.056303 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:22.056314 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:22.056324 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:22.056335 | orchestrator | 2025-09-20 10:35:22.056346 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-20 10:35:22.056357 | orchestrator | Saturday 20 September 2025 10:35:20 +0000 (0:00:00.701) 0:00:22.071 **** 2025-09-20 10:35:22.056368 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056379 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056390 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056401 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056411 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 10:35:22.056422 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056433 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 10:35:22.056444 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056454 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 10:35:22.056465 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-20 10:35:22.056476 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 10:35:22.056487 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 10:35:22.056497 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 10:35:22.056509 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-20 
10:35:22.056519 | orchestrator | 2025-09-20 10:35:22.056538 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-20 10:35:38.746115 | orchestrator | Saturday 20 September 2025 10:35:22 +0000 (0:00:01.108) 0:00:23.180 **** 2025-09-20 10:35:38.746234 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:35:38.746250 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:35:38.746263 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:35:38.746274 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:35:38.746285 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:35:38.746296 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:35:38.746307 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:35:38.746318 | orchestrator | 2025-09-20 10:35:38.746346 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-20 10:35:38.746358 | orchestrator | Saturday 20 September 2025 10:35:22 +0000 (0:00:00.589) 0:00:23.769 **** 2025-09-20 10:35:38.746371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-5, testbed-node-4, testbed-node-3 2025-09-20 10:35:38.746385 | orchestrator | 2025-09-20 10:35:38.746396 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-20 10:35:38.746407 | orchestrator | Saturday 20 September 2025 10:35:26 +0000 (0:00:04.141) 0:00:27.911 **** 2025-09-20 10:35:38.746419 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746477 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746668 | orchestrator | 2025-09-20 10:35:38.746681 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-20 10:35:38.746693 | orchestrator | Saturday 20 September 2025 10:35:32 +0000 (0:00:05.942) 0:00:33.853 **** 2025-09-20 10:35:38.746706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746727 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746752 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746788 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-20 10:35:38.746838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:38.746875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:44.828129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-20 10:35:44.828243 | orchestrator | 2025-09-20 10:35:44.828259 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-20 10:35:44.828273 | orchestrator | Saturday 20 September 2025 10:35:38 +0000 (0:00:06.018) 
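
The two tasks above write systemd-networkd .netdev and .network files for vxlan0 (VNI 42) and vxlan1 (VNI 23), each with MTU 1350 and a per-host local endpoint; the generated file contents themselves are not part of the log. A hedged way to inspect the result on any node once systemd-networkd has picked the files up:

# files written by the role (the 30-vxlan* names match the cleanup task further below)
ls /etc/systemd/network/30-vxlan*
# link details of the overlay interface: VNI, local endpoint, MTU
ip -d link show vxlan0
# with unicast VXLAN the remote endpoints typically appear as FDB entries on the interface
bridge fdb show dev vxlan0
# systemd-networkd's own view of the interface
networkctl status vxlan0
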
0:00:39.872 **** 2025-09-20 10:35:44.828308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:35:44.828320 | orchestrator | 2025-09-20 10:35:44.828332 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-20 10:35:44.828343 | orchestrator | Saturday 20 September 2025 10:35:40 +0000 (0:00:01.294) 0:00:41.167 **** 2025-09-20 10:35:44.828353 | orchestrator | ok: [testbed-manager] 2025-09-20 10:35:44.828366 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:35:44.828377 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:35:44.828387 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:35:44.828397 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:35:44.828408 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:35:44.828419 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:35:44.828429 | orchestrator | 2025-09-20 10:35:44.828440 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-20 10:35:44.828451 | orchestrator | Saturday 20 September 2025 10:35:41 +0000 (0:00:01.168) 0:00:42.336 **** 2025-09-20 10:35:44.828462 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828473 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828484 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828495 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828505 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:35:44.828517 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828527 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828538 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828575 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828586 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:35:44.828596 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828607 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828618 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828628 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828639 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:35:44.828651 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828663 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828691 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828704 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828716 | orchestrator | skipping: [testbed-node-2] 2025-09-20 
10:35:44.828728 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828740 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828752 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828763 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828776 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:35:44.828788 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828800 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828819 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828831 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828841 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:35:44.828852 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 10:35:44.828863 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 10:35:44.828873 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 10:35:44.828884 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 10:35:44.828894 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:35:44.828905 | orchestrator | 2025-09-20 10:35:44.828916 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-20 10:35:44.828945 | orchestrator | Saturday 20 September 2025 10:35:43 +0000 (0:00:02.039) 0:00:44.375 **** 2025-09-20 10:35:44.828956 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:35:44.828967 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:35:44.828978 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:35:44.828988 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:35:44.828999 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:35:44.829015 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:35:44.829025 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:35:44.829036 | orchestrator | 2025-09-20 10:35:44.829047 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-20 10:35:44.829057 | orchestrator | Saturday 20 September 2025 10:35:43 +0000 (0:00:00.641) 0:00:45.017 **** 2025-09-20 10:35:44.829068 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:35:44.829078 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:35:44.829089 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:35:44.829099 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:35:44.829110 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:35:44.829120 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:35:44.829131 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:35:44.829141 | orchestrator | 2025-09-20 10:35:44.829152 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:35:44.829164 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 10:35:44.829177 | 
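
Both handlers are skipped in this run. Had they fired, they would roughly amount to the following on the affected host (a loose shell equivalent, not the role's actual handler code):

# pick up changed *.netdev/*.network files without restarting the daemon
networkctl reload
# re-render and apply the changed netplan configuration
netplan apply
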
orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:35:44.829188 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:35:44.829198 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:35:44.829209 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:35:44.829220 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:35:44.829230 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:35:44.829241 | orchestrator | 2025-09-20 10:35:44.829251 | orchestrator | 2025-09-20 10:35:44.829262 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:35:44.829273 | orchestrator | Saturday 20 September 2025 10:35:44 +0000 (0:00:00.697) 0:00:45.715 **** 2025-09-20 10:35:44.829284 | orchestrator | =============================================================================== 2025-09-20 10:35:44.829301 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.02s 2025-09-20 10:35:44.829312 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.94s 2025-09-20 10:35:44.829323 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.14s 2025-09-20 10:35:44.829333 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.24s 2025-09-20 10:35:44.829344 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.04s 2025-09-20 10:35:44.829354 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.02s 2025-09-20 10:35:44.829365 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.87s 2025-09-20 10:35:44.829376 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.84s 2025-09-20 10:35:44.829386 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2025-09-20 10:35:44.829397 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.56s 2025-09-20 10:35:44.829408 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.37s 2025-09-20 10:35:44.829418 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2025-09-20 10:35:44.829429 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s 2025-09-20 10:35:44.829439 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s 2025-09-20 10:35:44.829450 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.11s 2025-09-20 10:35:44.829461 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.09s 2025-09-20 10:35:44.829471 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.05s 2025-09-20 10:35:44.829482 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s 2025-09-20 10:35:44.829492 | orchestrator | osism.commons.network : Copy dispatcher scripts 
------------------------- 0.91s 2025-09-20 10:35:44.829503 | orchestrator | osism.commons.network : Create required directories --------------------- 0.90s 2025-09-20 10:35:45.021477 | orchestrator | + osism apply wireguard 2025-09-20 10:35:56.742382 | orchestrator | 2025-09-20 10:35:56 | INFO  | Task 8cf6fa1f-ff78-413f-8bff-a55ad0d4aea4 (wireguard) was prepared for execution. 2025-09-20 10:35:56.742505 | orchestrator | 2025-09-20 10:35:56 | INFO  | It takes a moment until task 8cf6fa1f-ff78-413f-8bff-a55ad0d4aea4 (wireguard) has been started and output is visible here. 2025-09-20 10:36:15.728231 | orchestrator | 2025-09-20 10:36:15.728356 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-20 10:36:15.728373 | orchestrator | 2025-09-20 10:36:15.728385 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-20 10:36:15.728416 | orchestrator | Saturday 20 September 2025 10:36:00 +0000 (0:00:00.241) 0:00:00.241 **** 2025-09-20 10:36:15.728428 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:15.728441 | orchestrator | 2025-09-20 10:36:15.728452 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-20 10:36:15.728462 | orchestrator | Saturday 20 September 2025 10:36:02 +0000 (0:00:01.607) 0:00:01.849 **** 2025-09-20 10:36:15.728474 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:15.728530 | orchestrator | 2025-09-20 10:36:15.728542 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-20 10:36:15.728553 | orchestrator | Saturday 20 September 2025 10:36:08 +0000 (0:00:05.825) 0:00:07.675 **** 2025-09-20 10:36:15.728564 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:15.728575 | orchestrator | 2025-09-20 10:36:15.728586 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-20 10:36:15.728596 | orchestrator | Saturday 20 September 2025 10:36:08 +0000 (0:00:00.485) 0:00:08.161 **** 2025-09-20 10:36:15.728607 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:15.728644 | orchestrator | 2025-09-20 10:36:15.728655 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-20 10:36:15.728667 | orchestrator | Saturday 20 September 2025 10:36:09 +0000 (0:00:00.425) 0:00:08.587 **** 2025-09-20 10:36:15.728678 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:15.728689 | orchestrator | 2025-09-20 10:36:15.728699 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-20 10:36:15.728710 | orchestrator | Saturday 20 September 2025 10:36:09 +0000 (0:00:00.484) 0:00:09.072 **** 2025-09-20 10:36:15.728721 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:15.728732 | orchestrator | 2025-09-20 10:36:15.728742 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-20 10:36:15.728753 | orchestrator | Saturday 20 September 2025 10:36:10 +0000 (0:00:00.493) 0:00:09.565 **** 2025-09-20 10:36:15.728764 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:15.728776 | orchestrator | 2025-09-20 10:36:15.728788 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-20 10:36:15.728800 | orchestrator | Saturday 20 September 2025 10:36:10 +0000 (0:00:00.411) 0:00:09.977 **** 2025-09-20 10:36:15.728811 | orchestrator | 
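
The key-handling tasks above correspond to the standard wireguard-tools workflow; a minimal sketch of the equivalent commands on the manager (file names and working directory are assumptions, not taken from the log):

umask 077                                             # keep key material readable by root only
wg genkey | tee server.key | wg pubkey > server.pub   # private key plus derived public key
wg genpsk > server.psk                                # preshared key used for the peer entries
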
changed: [testbed-manager] 2025-09-20 10:36:15.728823 | orchestrator | 2025-09-20 10:36:15.728836 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-20 10:36:15.728848 | orchestrator | Saturday 20 September 2025 10:36:11 +0000 (0:00:01.175) 0:00:11.152 **** 2025-09-20 10:36:15.728860 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 10:36:15.728872 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:15.728884 | orchestrator | 2025-09-20 10:36:15.728896 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-20 10:36:15.728909 | orchestrator | Saturday 20 September 2025 10:36:12 +0000 (0:00:00.972) 0:00:12.125 **** 2025-09-20 10:36:15.728921 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:15.728932 | orchestrator | 2025-09-20 10:36:15.728945 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-20 10:36:15.728957 | orchestrator | Saturday 20 September 2025 10:36:14 +0000 (0:00:01.730) 0:00:13.855 **** 2025-09-20 10:36:15.728969 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:15.728981 | orchestrator | 2025-09-20 10:36:15.728992 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:36:15.729005 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:36:15.729018 | orchestrator | 2025-09-20 10:36:15.729030 | orchestrator | 2025-09-20 10:36:15.729041 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:36:15.729054 | orchestrator | Saturday 20 September 2025 10:36:15 +0000 (0:00:00.929) 0:00:14.785 **** 2025-09-20 10:36:15.729066 | orchestrator | =============================================================================== 2025-09-20 10:36:15.729078 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.83s 2025-09-20 10:36:15.729090 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2025-09-20 10:36:15.729102 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s 2025-09-20 10:36:15.729114 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2025-09-20 10:36:15.729125 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-09-20 10:36:15.729136 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-09-20 10:36:15.729146 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.49s 2025-09-20 10:36:15.729157 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s 2025-09-20 10:36:15.729168 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s 2025-09-20 10:36:15.729178 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-09-20 10:36:15.729198 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-09-20 10:36:16.004836 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-20 10:36:16.048684 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-20 10:36:16.048751 | 
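
Once wg0.conf is in place the role brings the tunnel up via wg-quick, which is what the service task and restart handler above do. Verifying the result on the manager (interface and unit names as they appear in the log):

systemctl status wg-quick@wg0   # the unit managed by the role
wg show wg0                     # peers, endpoints and latest handshakes
ip addr show dev wg0            # address assigned to the tunnel interface
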
orchestrator | Dload Upload Total Spent Left Speed 2025-09-20 10:36:16.132544 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 166 0 --:--:-- --:--:-- --:--:-- 166 2025-09-20 10:36:16.150642 | orchestrator | + osism apply --environment custom workarounds 2025-09-20 10:36:18.106954 | orchestrator | 2025-09-20 10:36:18 | INFO  | Trying to run play workarounds in environment custom 2025-09-20 10:36:28.308784 | orchestrator | 2025-09-20 10:36:28 | INFO  | Task 79acfa96-5ef1-45e1-8b6f-553bc7ba6e6a (workarounds) was prepared for execution. 2025-09-20 10:36:28.308903 | orchestrator | 2025-09-20 10:36:28 | INFO  | It takes a moment until task 79acfa96-5ef1-45e1-8b6f-553bc7ba6e6a (workarounds) has been started and output is visible here. 2025-09-20 10:36:52.102569 | orchestrator | 2025-09-20 10:36:52.102687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:36:52.102705 | orchestrator | 2025-09-20 10:36:52.102717 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-20 10:36:52.102729 | orchestrator | Saturday 20 September 2025 10:36:31 +0000 (0:00:00.134) 0:00:00.135 **** 2025-09-20 10:36:52.102741 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102752 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102763 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102773 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102784 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102795 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102805 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-20 10:36:52.102816 | orchestrator | 2025-09-20 10:36:52.102827 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-20 10:36:52.102838 | orchestrator | 2025-09-20 10:36:52.102849 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-20 10:36:52.102859 | orchestrator | Saturday 20 September 2025 10:36:32 +0000 (0:00:00.668) 0:00:00.803 **** 2025-09-20 10:36:52.102871 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:52.102883 | orchestrator | 2025-09-20 10:36:52.102894 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-20 10:36:52.102905 | orchestrator | 2025-09-20 10:36:52.102916 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-20 10:36:52.102927 | orchestrator | Saturday 20 September 2025 10:36:34 +0000 (0:00:02.367) 0:00:03.171 **** 2025-09-20 10:36:52.102938 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:36:52.102949 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:36:52.102959 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:36:52.102970 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:36:52.102981 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:36:52.102992 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:36:52.103002 | orchestrator | 2025-09-20 10:36:52.103014 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-20 10:36:52.103026 | orchestrator | 
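
The play starting here copies the testbed CA to the non-manager nodes and refreshes the system trust store; the RedHat branch (update-ca-trust) is skipped on these Ubuntu hosts. On a Debian-family system the same effect can be had manually (the destination directory is the usual ca-certificates drop-in location, assumed here rather than read from the playbook):

# certificates below /usr/local/share/ca-certificates/ with a .crt suffix are picked up
cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
   /usr/local/share/ca-certificates/testbed.crt
# rebuild /etc/ssl/certs and the combined ca-certificates.crt bundle
update-ca-certificates
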
2025-09-20 10:36:52.103036 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-20 10:36:52.103047 | orchestrator | Saturday 20 September 2025 10:36:36 +0000 (0:00:01.739) 0:00:04.911 **** 2025-09-20 10:36:52.103058 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 10:36:52.103071 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 10:36:52.103102 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 10:36:52.103115 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 10:36:52.103127 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 10:36:52.103139 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 10:36:52.103152 | orchestrator | 2025-09-20 10:36:52.103164 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-20 10:36:52.103177 | orchestrator | Saturday 20 September 2025 10:36:38 +0000 (0:00:01.409) 0:00:06.320 **** 2025-09-20 10:36:52.103189 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:36:52.103202 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:36:52.103214 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:36:52.103226 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:36:52.103238 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:36:52.103250 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:36:52.103262 | orchestrator | 2025-09-20 10:36:52.103274 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-20 10:36:52.103286 | orchestrator | Saturday 20 September 2025 10:36:41 +0000 (0:00:03.590) 0:00:09.910 **** 2025-09-20 10:36:52.103298 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:36:52.103310 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:36:52.103322 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:36:52.103334 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:36:52.103346 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:36:52.103358 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:36:52.103370 | orchestrator | 2025-09-20 10:36:52.103383 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-20 10:36:52.103395 | orchestrator | 2025-09-20 10:36:52.103407 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-20 10:36:52.103419 | orchestrator | Saturday 20 September 2025 10:36:42 +0000 (0:00:00.693) 0:00:10.604 **** 2025-09-20 10:36:52.103451 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:52.103462 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:36:52.103473 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:36:52.103483 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:36:52.103494 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:36:52.103504 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:36:52.103515 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:36:52.103525 | orchestrator | 2025-09-20 10:36:52.103536 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-20 10:36:52.103547 | orchestrator | Saturday 20 September 2025 10:36:44 +0000 (0:00:01.643) 0:00:12.248 **** 2025-09-20 10:36:52.103566 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:52.103577 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:36:52.103587 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:36:52.103598 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:36:52.103608 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:36:52.103619 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:36:52.103645 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:36:52.103657 | orchestrator | 2025-09-20 10:36:52.103668 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-20 10:36:52.103679 | orchestrator | Saturday 20 September 2025 10:36:45 +0000 (0:00:01.656) 0:00:13.904 **** 2025-09-20 10:36:52.103690 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:36:52.103700 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:36:52.103711 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:36:52.103721 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:36:52.103732 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:36:52.103750 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:52.103761 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:36:52.103772 | orchestrator | 2025-09-20 10:36:52.103783 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-20 10:36:52.103794 | orchestrator | Saturday 20 September 2025 10:36:47 +0000 (0:00:01.429) 0:00:15.334 **** 2025-09-20 10:36:52.103804 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:36:52.103815 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:36:52.103826 | orchestrator | changed: [testbed-manager] 2025-09-20 10:36:52.103836 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:36:52.103847 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:36:52.103857 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:36:52.103867 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:36:52.103878 | orchestrator | 2025-09-20 10:36:52.103889 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-20 10:36:52.103899 | orchestrator | Saturday 20 September 2025 10:36:48 +0000 (0:00:01.687) 0:00:17.022 **** 2025-09-20 10:36:52.103910 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:36:52.103920 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:36:52.103931 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:36:52.103941 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:36:52.103952 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:36:52.103962 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:36:52.103972 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:36:52.103983 | orchestrator | 2025-09-20 10:36:52.103994 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-20 10:36:52.104005 | orchestrator | 2025-09-20 10:36:52.104015 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-20 10:36:52.104026 | orchestrator | Saturday 20 September 2025 10:36:49 +0000 (0:00:00.651) 0:00:17.673 **** 2025-09-20 10:36:52.104037 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:36:52.104047 
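
The workaround service is a copied script plus a systemd unit that gets enabled so the workarounds run again after every boot. The unit file itself is not shown in the log; a minimal oneshot unit of this shape would achieve the same (unit contents and the script path are assumptions):

cat > /etc/systemd/system/workarounds.service <<'EOF'
[Unit]
Description=Apply testbed workarounds at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload               # matches the "Reload systemd daemon" task
systemctl enable workarounds.service  # matches "Enable workarounds.service (Debian)"
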
| orchestrator | ok: [testbed-node-0] 2025-09-20 10:36:52.104058 | orchestrator | ok: [testbed-manager] 2025-09-20 10:36:52.104069 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:36:52.104079 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:36:52.104090 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:36:52.104100 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:36:52.104111 | orchestrator | 2025-09-20 10:36:52.104122 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:36:52.104133 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:36:52.104145 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:36:52.104156 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:36:52.104167 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:36:52.104178 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:36:52.104188 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:36:52.104199 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:36:52.104210 | orchestrator | 2025-09-20 10:36:52.104220 | orchestrator | 2025-09-20 10:36:52.104232 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:36:52.104242 | orchestrator | Saturday 20 September 2025 10:36:52 +0000 (0:00:02.615) 0:00:20.289 **** 2025-09-20 10:36:52.104260 | orchestrator | =============================================================================== 2025-09-20 10:36:52.104271 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.59s 2025-09-20 10:36:52.104281 | orchestrator | Install python3-docker -------------------------------------------------- 2.62s 2025-09-20 10:36:52.104292 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s 2025-09-20 10:36:52.104303 | orchestrator | Apply netplan configuration --------------------------------------------- 1.74s 2025-09-20 10:36:52.104313 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.69s 2025-09-20 10:36:52.104324 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.66s 2025-09-20 10:36:52.104334 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-09-20 10:36:52.104345 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.43s 2025-09-20 10:36:52.104361 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2025-09-20 10:36:52.104372 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-09-20 10:36:52.104383 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.67s 2025-09-20 10:36:52.104400 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s 2025-09-20 10:36:52.547981 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-20 10:37:04.392703 | orchestrator | 
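
The reboot playbook invoked here is guarded: it aborts unless ireallymeanit=yes is passed as an extra variable, which is why the "Exit playbook, if user did not mean to reboot systems" task is skipped for every node in the output that follows. A purely illustrative shell version of that guard (not the playbook's actual implementation):

ireallymeanit="${1:-no}"
if [ "$ireallymeanit" != "yes" ]; then
    echo "refusing to reboot: pass ireallymeanit=yes to confirm" >&2
    exit 1
fi
shutdown -r now
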
2025-09-20 10:37:04 | INFO  | Task de015417-d589-4a18-b2c2-63eaa4df95d6 (reboot) was prepared for execution. 2025-09-20 10:37:04.392815 | orchestrator | 2025-09-20 10:37:04 | INFO  | It takes a moment until task de015417-d589-4a18-b2c2-63eaa4df95d6 (reboot) has been started and output is visible here. 2025-09-20 10:37:14.371132 | orchestrator | 2025-09-20 10:37:14.371282 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 10:37:14.371314 | orchestrator | 2025-09-20 10:37:14.371337 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 10:37:14.371359 | orchestrator | Saturday 20 September 2025 10:37:08 +0000 (0:00:00.211) 0:00:00.211 **** 2025-09-20 10:37:14.371380 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:37:14.371453 | orchestrator | 2025-09-20 10:37:14.371473 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 10:37:14.371492 | orchestrator | Saturday 20 September 2025 10:37:08 +0000 (0:00:00.113) 0:00:00.325 **** 2025-09-20 10:37:14.371511 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:37:14.371531 | orchestrator | 2025-09-20 10:37:14.371551 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 10:37:14.371572 | orchestrator | Saturday 20 September 2025 10:37:09 +0000 (0:00:00.947) 0:00:01.272 **** 2025-09-20 10:37:14.371593 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:37:14.371613 | orchestrator | 2025-09-20 10:37:14.371634 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 10:37:14.371655 | orchestrator | 2025-09-20 10:37:14.371674 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 10:37:14.371693 | orchestrator | Saturday 20 September 2025 10:37:09 +0000 (0:00:00.115) 0:00:01.388 **** 2025-09-20 10:37:14.371712 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:37:14.371732 | orchestrator | 2025-09-20 10:37:14.371752 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 10:37:14.371780 | orchestrator | Saturday 20 September 2025 10:37:09 +0000 (0:00:00.114) 0:00:01.502 **** 2025-09-20 10:37:14.371810 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:37:14.371840 | orchestrator | 2025-09-20 10:37:14.371870 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 10:37:14.371901 | orchestrator | Saturday 20 September 2025 10:37:10 +0000 (0:00:00.635) 0:00:02.138 **** 2025-09-20 10:37:14.371928 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:37:14.371957 | orchestrator | 2025-09-20 10:37:14.372017 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 10:37:14.372039 | orchestrator | 2025-09-20 10:37:14.372059 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 10:37:14.372079 | orchestrator | Saturday 20 September 2025 10:37:10 +0000 (0:00:00.109) 0:00:02.247 **** 2025-09-20 10:37:14.372099 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:37:14.372118 | orchestrator | 2025-09-20 10:37:14.372137 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 10:37:14.372156 | orchestrator | Saturday 20 September 2025 
10:37:10 +0000 (0:00:00.198) 0:00:02.445 **** 2025-09-20 10:37:14.372174 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:37:14.372192 | orchestrator | 2025-09-20 10:37:14.372210 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 10:37:14.372227 | orchestrator | Saturday 20 September 2025 10:37:11 +0000 (0:00:00.667) 0:00:03.113 **** 2025-09-20 10:37:14.372246 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:37:14.372263 | orchestrator | 2025-09-20 10:37:14.372280 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 10:37:14.372297 | orchestrator | 2025-09-20 10:37:14.372315 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 10:37:14.372333 | orchestrator | Saturday 20 September 2025 10:37:11 +0000 (0:00:00.117) 0:00:03.231 **** 2025-09-20 10:37:14.372350 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:37:14.372365 | orchestrator | 2025-09-20 10:37:14.372381 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 10:37:14.372426 | orchestrator | Saturday 20 September 2025 10:37:11 +0000 (0:00:00.125) 0:00:03.357 **** 2025-09-20 10:37:14.372442 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:37:14.372457 | orchestrator | 2025-09-20 10:37:14.372473 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 10:37:14.372489 | orchestrator | Saturday 20 September 2025 10:37:12 +0000 (0:00:00.658) 0:00:04.015 **** 2025-09-20 10:37:14.372504 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:37:14.372519 | orchestrator | 2025-09-20 10:37:14.372537 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 10:37:14.372555 | orchestrator | 2025-09-20 10:37:14.372574 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 10:37:14.372594 | orchestrator | Saturday 20 September 2025 10:37:12 +0000 (0:00:00.106) 0:00:04.122 **** 2025-09-20 10:37:14.372613 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:37:14.372632 | orchestrator | 2025-09-20 10:37:14.372652 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 10:37:14.372673 | orchestrator | Saturday 20 September 2025 10:37:12 +0000 (0:00:00.109) 0:00:04.232 **** 2025-09-20 10:37:14.372692 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:37:14.372712 | orchestrator | 2025-09-20 10:37:14.372733 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 10:37:14.372753 | orchestrator | Saturday 20 September 2025 10:37:13 +0000 (0:00:00.696) 0:00:04.928 **** 2025-09-20 10:37:14.372771 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:37:14.372789 | orchestrator | 2025-09-20 10:37:14.372831 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 10:37:14.372852 | orchestrator | 2025-09-20 10:37:14.372872 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 10:37:14.372892 | orchestrator | Saturday 20 September 2025 10:37:13 +0000 (0:00:00.118) 0:00:05.046 **** 2025-09-20 10:37:14.372911 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:37:14.372930 | orchestrator | 2025-09-20 10:37:14.372947 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 10:37:14.372967 | orchestrator | Saturday 20 September 2025 10:37:13 +0000 (0:00:00.103) 0:00:05.150 **** 2025-09-20 10:37:14.372988 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:37:14.373008 | orchestrator | 2025-09-20 10:37:14.373028 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 10:37:14.373065 | orchestrator | Saturday 20 September 2025 10:37:13 +0000 (0:00:00.662) 0:00:05.813 **** 2025-09-20 10:37:14.373114 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:37:14.373135 | orchestrator | 2025-09-20 10:37:14.373151 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:37:14.373168 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:37:14.373185 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:37:14.373200 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:37:14.373216 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:37:14.373232 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:37:14.373247 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:37:14.373262 | orchestrator | 2025-09-20 10:37:14.373278 | orchestrator | 2025-09-20 10:37:14.373293 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:37:14.373308 | orchestrator | Saturday 20 September 2025 10:37:14 +0000 (0:00:00.036) 0:00:05.850 **** 2025-09-20 10:37:14.373323 | orchestrator | =============================================================================== 2025-09-20 10:37:14.373338 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2025-09-20 10:37:14.373362 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s 2025-09-20 10:37:14.373381 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s 2025-09-20 10:37:14.672501 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-20 10:37:26.780836 | orchestrator | 2025-09-20 10:37:26 | INFO  | Task 6af19dfd-ab90-4912-b984-d6875596cacc (wait-for-connection) was prepared for execution. 2025-09-20 10:37:26.780961 | orchestrator | 2025-09-20 10:37:26 | INFO  | It takes a moment until task 6af19dfd-ab90-4912-b984-d6875596cacc (wait-for-connection) has been started and output is visible here. 
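
The wait-for-connection play queued above simply blocks until each rebooted node is reachable over SSH again. Outside of OSISM the same check can be expressed as an Ansible ad-hoc call or a plain SSH probe (host pattern, inventory defaults and timeout are assumptions):

# retries the connection plugin until the host answers or the timeout expires
ansible testbed-nodes -m wait_for_connection -a "timeout=600"
# per-host alternative: probe SSH until it succeeds
until ssh -o ConnectTimeout=5 testbed-node-0 true; do sleep 5; done
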
2025-09-20 10:37:42.747788 | orchestrator | 2025-09-20 10:37:42.747917 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-20 10:37:42.747944 | orchestrator | 2025-09-20 10:37:42.747963 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-20 10:37:42.747982 | orchestrator | Saturday 20 September 2025 10:37:30 +0000 (0:00:00.243) 0:00:00.243 **** 2025-09-20 10:37:42.748003 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:37:42.748024 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:37:42.748043 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:37:42.748062 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:37:42.748081 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:37:42.748100 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:37:42.748119 | orchestrator | 2025-09-20 10:37:42.748139 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:37:42.748159 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:37:42.748181 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:37:42.748200 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:37:42.748244 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:37:42.748255 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:37:42.748266 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:37:42.748277 | orchestrator | 2025-09-20 10:37:42.748288 | orchestrator | 2025-09-20 10:37:42.748299 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:37:42.748310 | orchestrator | Saturday 20 September 2025 10:37:42 +0000 (0:00:11.562) 0:00:11.806 **** 2025-09-20 10:37:42.748321 | orchestrator | =============================================================================== 2025-09-20 10:37:42.748334 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2025-09-20 10:37:43.043210 | orchestrator | + osism apply hddtemp 2025-09-20 10:37:54.957729 | orchestrator | 2025-09-20 10:37:54 | INFO  | Task a66a6201-54f5-4887-91b6-d8e5455994ad (hddtemp) was prepared for execution. 2025-09-20 10:37:54.957848 | orchestrator | 2025-09-20 10:37:54 | INFO  | It takes a moment until task a66a6201-54f5-4887-91b6-d8e5455994ad (hddtemp) has been started and output is visible here. 
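
The hddtemp role applied below replaces the obsolete hddtemp daemon with the in-kernel drivetemp hwmon driver plus lm-sensors. On a Debian-family host the manual equivalent is roughly:

apt-get remove -y hddtemp                             # legacy userspace daemon, no longer needed
echo drivetemp > /etc/modules-load.d/drivetemp.conf   # load the module on every boot
modprobe drivetemp                                    # load it right away
apt-get install -y lm-sensors
sensors                                               # drive temperatures now show up as hwmon sensors
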
2025-09-20 10:38:21.194412 | orchestrator | 2025-09-20 10:38:21.194519 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-20 10:38:21.194533 | orchestrator | 2025-09-20 10:38:21.194561 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-20 10:38:21.194572 | orchestrator | Saturday 20 September 2025 10:37:58 +0000 (0:00:00.238) 0:00:00.238 **** 2025-09-20 10:38:21.194581 | orchestrator | ok: [testbed-manager] 2025-09-20 10:38:21.194591 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:38:21.194600 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:38:21.194608 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:38:21.194617 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:38:21.194626 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:38:21.194634 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:38:21.194643 | orchestrator | 2025-09-20 10:38:21.194652 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-20 10:38:21.194661 | orchestrator | Saturday 20 September 2025 10:37:59 +0000 (0:00:00.595) 0:00:00.834 **** 2025-09-20 10:38:21.194671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:38:21.194682 | orchestrator | 2025-09-20 10:38:21.194691 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-20 10:38:21.194699 | orchestrator | Saturday 20 September 2025 10:38:00 +0000 (0:00:01.104) 0:00:01.938 **** 2025-09-20 10:38:21.194708 | orchestrator | ok: [testbed-manager] 2025-09-20 10:38:21.194717 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:38:21.194725 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:38:21.194734 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:38:21.194742 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:38:21.194750 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:38:21.194759 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:38:21.194767 | orchestrator | 2025-09-20 10:38:21.194776 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-20 10:38:21.194785 | orchestrator | Saturday 20 September 2025 10:38:02 +0000 (0:00:01.895) 0:00:03.834 **** 2025-09-20 10:38:21.194793 | orchestrator | changed: [testbed-manager] 2025-09-20 10:38:21.194803 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:38:21.194811 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:38:21.194820 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:38:21.194828 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:38:21.194858 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:38:21.194867 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:38:21.194875 | orchestrator | 2025-09-20 10:38:21.194884 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-20 10:38:21.194892 | orchestrator | Saturday 20 September 2025 10:38:03 +0000 (0:00:01.026) 0:00:04.860 **** 2025-09-20 10:38:21.194901 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:38:21.194909 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:38:21.194918 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:38:21.194928 | orchestrator | ok: [testbed-node-4] 2025-09-20 
10:38:21.194937 | orchestrator | ok: [testbed-manager] 2025-09-20 10:38:21.194946 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:38:21.194956 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:38:21.194965 | orchestrator | 2025-09-20 10:38:21.194975 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-20 10:38:21.194985 | orchestrator | Saturday 20 September 2025 10:38:05 +0000 (0:00:01.912) 0:00:06.773 **** 2025-09-20 10:38:21.194994 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:38:21.195004 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:38:21.195013 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:38:21.195023 | orchestrator | changed: [testbed-manager] 2025-09-20 10:38:21.195033 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:38:21.195042 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:38:21.195051 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:38:21.195060 | orchestrator | 2025-09-20 10:38:21.195070 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-20 10:38:21.195080 | orchestrator | Saturday 20 September 2025 10:38:06 +0000 (0:00:00.693) 0:00:07.467 **** 2025-09-20 10:38:21.195089 | orchestrator | changed: [testbed-manager] 2025-09-20 10:38:21.195099 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:38:21.195108 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:38:21.195117 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:38:21.195126 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:38:21.195136 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:38:21.195146 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:38:21.195155 | orchestrator | 2025-09-20 10:38:21.195165 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-20 10:38:21.195174 | orchestrator | Saturday 20 September 2025 10:38:17 +0000 (0:00:11.544) 0:00:19.011 **** 2025-09-20 10:38:21.195184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:38:21.195194 | orchestrator | 2025-09-20 10:38:21.195204 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-20 10:38:21.195213 | orchestrator | Saturday 20 September 2025 10:38:18 +0000 (0:00:01.282) 0:00:20.293 **** 2025-09-20 10:38:21.195223 | orchestrator | changed: [testbed-manager] 2025-09-20 10:38:21.195237 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:38:21.195247 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:38:21.195257 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:38:21.195266 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:38:21.195276 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:38:21.195285 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:38:21.195293 | orchestrator | 2025-09-20 10:38:21.195302 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:38:21.195310 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:38:21.195350 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:38:21.195361 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:38:21.195377 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:38:21.195386 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:38:21.195394 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:38:21.195403 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:38:21.195411 | orchestrator | 2025-09-20 10:38:21.195420 | orchestrator | 2025-09-20 10:38:21.195428 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:38:21.195437 | orchestrator | Saturday 20 September 2025 10:38:20 +0000 (0:00:01.848) 0:00:22.142 **** 2025-09-20 10:38:21.195445 | orchestrator | =============================================================================== 2025-09-20 10:38:21.195454 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.54s 2025-09-20 10:38:21.195463 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.91s 2025-09-20 10:38:21.195471 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.90s 2025-09-20 10:38:21.195480 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s 2025-09-20 10:38:21.195488 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2025-09-20 10:38:21.195497 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.10s 2025-09-20 10:38:21.195505 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.03s 2025-09-20 10:38:21.195514 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s 2025-09-20 10:38:21.195522 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s 2025-09-20 10:38:21.484725 | orchestrator | ++ semver latest 7.1.1 2025-09-20 10:38:21.557908 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 10:38:21.557945 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 10:38:21.557955 | orchestrator | + sudo systemctl restart manager.service 2025-09-20 10:38:35.305981 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-20 10:38:35.306162 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-20 10:38:35.306180 | orchestrator | + local max_attempts=60 2025-09-20 10:38:35.306193 | orchestrator | + local name=ceph-ansible 2025-09-20 10:38:35.306204 | orchestrator | + local attempt_num=1 2025-09-20 10:38:35.306754 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:38:35.341024 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:38:35.341120 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:38:35.341135 | orchestrator | + sleep 5 2025-09-20 10:38:40.342581 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:38:40.379163 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:38:40.379244 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:38:40.379257 | orchestrator | + sleep 5 2025-09-20 
10:38:45.381777 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:38:45.423565 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:38:45.423706 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:38:45.423726 | orchestrator | + sleep 5 2025-09-20 10:38:50.426137 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:38:50.463085 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:38:50.463174 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:38:50.463189 | orchestrator | + sleep 5 2025-09-20 10:38:55.467166 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:38:55.505533 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:38:55.505605 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:38:55.505639 | orchestrator | + sleep 5 2025-09-20 10:39:00.510435 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:00.546927 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:00.546975 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:00.546988 | orchestrator | + sleep 5 2025-09-20 10:39:05.552047 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:05.587783 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:05.587836 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:05.587847 | orchestrator | + sleep 5 2025-09-20 10:39:10.592819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:10.618935 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:10.619008 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:10.619022 | orchestrator | + sleep 5 2025-09-20 10:39:15.622129 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:15.661975 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:15.662076 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:15.662089 | orchestrator | + sleep 5 2025-09-20 10:39:20.665847 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:20.704427 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:20.704523 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:20.704538 | orchestrator | + sleep 5 2025-09-20 10:39:25.709271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:25.754700 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:25.754799 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:25.754813 | orchestrator | + sleep 5 2025-09-20 10:39:30.759557 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:30.794868 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:30.795044 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 10:39:30.795061 | orchestrator | + sleep 5 2025-09-20 10:39:35.800529 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:35.841682 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:35.841776 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-20 10:39:35.841791 | orchestrator | + sleep 5 2025-09-20 10:39:40.847484 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 10:39:40.873132 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:40.873204 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-20 10:39:40.873219 | orchestrator | + local max_attempts=60 2025-09-20 10:39:40.873232 | orchestrator | + local name=kolla-ansible 2025-09-20 10:39:40.873243 | orchestrator | + local attempt_num=1 2025-09-20 10:39:40.873400 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-20 10:39:40.900761 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:40.900813 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-20 10:39:40.900828 | orchestrator | + local max_attempts=60 2025-09-20 10:39:40.900840 | orchestrator | + local name=osism-ansible 2025-09-20 10:39:40.900851 | orchestrator | + local attempt_num=1 2025-09-20 10:39:40.901608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-20 10:39:40.930125 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 10:39:40.930179 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-20 10:39:40.930191 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-20 10:39:41.063329 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-20 10:39:41.213790 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-20 10:39:41.350999 | orchestrator | ARA in osism-ansible already disabled. 2025-09-20 10:39:41.488006 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-20 10:39:41.489452 | orchestrator | + osism apply gather-facts 2025-09-20 10:39:53.365551 | orchestrator | 2025-09-20 10:39:53 | INFO  | Task aa8f6c90-6e3c-4257-ad4f-ace01c70ff6d (gather-facts) was prepared for execution. 2025-09-20 10:39:53.365674 | orchestrator | 2025-09-20 10:39:53 | INFO  | It takes a moment until task aa8f6c90-6e3c-4257-ad4f-ace01c70ff6d (gather-facts) has been started and output is visible here. 
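The hddtemp play earlier in this trace removes the deprecated hddtemp package and switches the hosts to the in-kernel drivetemp hwmon driver plus lm-sensors. A minimal shell sketch of the equivalent manual steps on a Debian-family host (an illustration of what the role automates, not the role's actual task list):

    # Persist the drivetemp module across reboots, load it now,
    # and install lm-sensors so drive temperatures can be read.
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf
    sudo modprobe drivetemp
    sudo apt-get install -y lm-sensors
    sensors   # SATA/SAS drive temperatures now appear as hwmon sensors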
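The shell trace above then restarts manager.service and polls the Docker health status of the ceph-ansible, kolla-ansible and osism-ansible containers before continuing. A sketch of the wait_for_container_healthy helper as it can be reconstructed from the trace (function name, arguments and the 5-second interval are taken from the trace; the exact script body is an assumption):

    # Poll `docker inspect` until the container reports "healthy",
    # sleeping 5 s between attempts and giving up after max_attempts.
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    wait_for_container_healthy 60 ceph-ansible

In the run above, ceph-ansible needed roughly a minute (unhealthy, then starting, then healthy), while kolla-ansible and osism-ansible were already healthy on the first check.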
2025-09-20 10:40:05.573434 | orchestrator | 2025-09-20 10:40:05.573535 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 10:40:05.573568 | orchestrator | 2025-09-20 10:40:05.573576 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 10:40:05.573584 | orchestrator | Saturday 20 September 2025 10:39:56 +0000 (0:00:00.202) 0:00:00.202 **** 2025-09-20 10:40:05.573591 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:40:05.573600 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:40:05.573607 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:40:05.573614 | orchestrator | ok: [testbed-manager] 2025-09-20 10:40:05.573622 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:40:05.573629 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:40:05.573636 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:40:05.573643 | orchestrator | 2025-09-20 10:40:05.573650 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 10:40:05.573657 | orchestrator | 2025-09-20 10:40:05.573665 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 10:40:05.573672 | orchestrator | Saturday 20 September 2025 10:40:04 +0000 (0:00:07.876) 0:00:08.078 **** 2025-09-20 10:40:05.573679 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:40:05.573688 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:40:05.573695 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:40:05.573702 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:40:05.573709 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:40:05.573716 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:40:05.573723 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:40:05.573731 | orchestrator | 2025-09-20 10:40:05.573738 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:40:05.573745 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573754 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573761 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573768 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573775 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573782 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573790 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:40:05.573797 | orchestrator | 2025-09-20 10:40:05.573805 | orchestrator | 2025-09-20 10:40:05.573812 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:40:05.573819 | orchestrator | Saturday 20 September 2025 10:40:05 +0000 (0:00:00.441) 0:00:08.520 **** 2025-09-20 10:40:05.573838 | orchestrator | =============================================================================== 2025-09-20 10:40:05.573846 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.88s 2025-09-20 
10:40:05.573853 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2025-09-20 10:40:05.870655 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-20 10:40:05.884765 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-20 10:40:05.903032 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-20 10:40:05.916103 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-20 10:40:05.930563 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-20 10:40:05.946935 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-20 10:40:05.959283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-20 10:40:05.975232 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-20 10:40:05.987768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-20 10:40:06.009757 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-20 10:40:06.032178 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-20 10:40:06.049872 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-20 10:40:06.068658 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-20 10:40:06.085053 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-20 10:40:06.102758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-20 10:40:06.121189 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-20 10:40:06.137496 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-20 10:40:06.152969 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-20 10:40:06.174509 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-20 10:40:06.189246 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-20 10:40:06.210690 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-20 10:40:06.667139 | orchestrator | ok: Runtime: 0:22:29.305433 2025-09-20 10:40:06.771025 | 2025-09-20 10:40:06.771162 | TASK [Deploy services] 2025-09-20 10:40:07.302366 | orchestrator | skipping: Conditional result was False 2025-09-20 10:40:07.319710 | 2025-09-20 10:40:07.319877 | TASK [Deploy in a nutshell] 2025-09-20 10:40:08.009902 | orchestrator | + set -e 
2025-09-20 10:40:08.011169 | orchestrator | 2025-09-20 10:40:08.011208 | orchestrator | # PULL IMAGES 2025-09-20 10:40:08.011224 | orchestrator | 2025-09-20 10:40:08.011243 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 10:40:08.011264 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 10:40:08.011279 | orchestrator | ++ INTERACTIVE=false 2025-09-20 10:40:08.011325 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 10:40:08.011348 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 10:40:08.011396 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 10:40:08.011409 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 10:40:08.011428 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 10:40:08.011440 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 10:40:08.011458 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 10:40:08.011470 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 10:40:08.011490 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 10:40:08.011501 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:40:08.011515 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:40:08.011527 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 10:40:08.011539 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 10:40:08.011551 | orchestrator | ++ export ARA=false 2025-09-20 10:40:08.011562 | orchestrator | ++ ARA=false 2025-09-20 10:40:08.011573 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 10:40:08.011585 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 10:40:08.011596 | orchestrator | ++ export TEMPEST=false 2025-09-20 10:40:08.011607 | orchestrator | ++ TEMPEST=false 2025-09-20 10:40:08.011618 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 10:40:08.011629 | orchestrator | ++ IS_ZUUL=true 2025-09-20 10:40:08.011640 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:40:08.011652 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 10:40:08.011663 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 10:40:08.011674 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 10:40:08.011685 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 10:40:08.011697 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 10:40:08.011708 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 10:40:08.011719 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 10:40:08.011730 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 10:40:08.011748 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 10:40:08.011760 | orchestrator | + echo 2025-09-20 10:40:08.011771 | orchestrator | + echo '# PULL IMAGES' 2025-09-20 10:40:08.011783 | orchestrator | + echo 2025-09-20 10:40:08.012767 | orchestrator | ++ semver latest 7.0.0 2025-09-20 10:40:08.072210 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 10:40:08.072239 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 10:40:08.072251 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-20 10:40:09.923296 | orchestrator | 2025-09-20 10:40:09 | INFO  | Trying to run play pull-images in environment custom 2025-09-20 10:40:20.147740 | orchestrator | 2025-09-20 10:40:20 | INFO  | Task 3b3cdfe9-159d-4ba8-82ea-e1db208ee4bc (pull-images) was prepared for execution. 2025-09-20 10:40:20.147854 | orchestrator | 2025-09-20 10:40:20 | INFO  | Task 3b3cdfe9-159d-4ba8-82ea-e1db208ee4bc is running in background. No more output. Check ARA for logs. 
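The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` pair in the trace above is a version gate: the image pre-pull runs when the pinned manager version is at least 7.0.0 or when it is `latest`. A hedged reconstruction of that gate (the `semver` helper printing -1/0/1 for less/equal/greater is inferred from the trace, not documented here):

    # Pre-pull images when MANAGER_VERSION >= 7.0.0 or MANAGER_VERSION == latest.
    if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]] || [[ "$MANAGER_VERSION" == latest ]]; then
        osism apply --no-wait -r 2 -e custom pull-images
    fi

With MANAGER_VERSION=latest, `semver latest 7.0.0` prints -1, so the numeric test fails and the string comparison lets the step run, which is exactly what the trace shows.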
2025-09-20 10:40:22.266671 | orchestrator | 2025-09-20 10:40:22 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-20 10:40:32.394771 | orchestrator | 2025-09-20 10:40:32 | INFO  | Task 50aab533-b1c4-4256-86d8-183c3554b858 (wipe-partitions) was prepared for execution. 2025-09-20 10:40:32.394924 | orchestrator | 2025-09-20 10:40:32 | INFO  | It takes a moment until task 50aab533-b1c4-4256-86d8-183c3554b858 (wipe-partitions) has been started and output is visible here. 2025-09-20 10:40:44.453584 | orchestrator | 2025-09-20 10:40:44.453708 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-20 10:40:44.453726 | orchestrator | 2025-09-20 10:40:44.453738 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-20 10:40:44.453756 | orchestrator | Saturday 20 September 2025 10:40:35 +0000 (0:00:00.120) 0:00:00.120 **** 2025-09-20 10:40:44.453770 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:40:44.453782 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:40:44.453793 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:40:44.453805 | orchestrator | 2025-09-20 10:40:44.453817 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-20 10:40:44.453852 | orchestrator | Saturday 20 September 2025 10:40:36 +0000 (0:00:00.526) 0:00:00.646 **** 2025-09-20 10:40:44.453864 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:40:44.453893 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:40:44.453909 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:40:44.453920 | orchestrator | 2025-09-20 10:40:44.453931 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-20 10:40:44.453942 | orchestrator | Saturday 20 September 2025 10:40:36 +0000 (0:00:00.232) 0:00:00.879 **** 2025-09-20 10:40:44.453953 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:40:44.453965 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:40:44.453976 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:40:44.453987 | orchestrator | 2025-09-20 10:40:44.453998 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-20 10:40:44.454009 | orchestrator | Saturday 20 September 2025 10:40:37 +0000 (0:00:00.614) 0:00:01.494 **** 2025-09-20 10:40:44.454074 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:40:44.454088 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:40:44.454099 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:40:44.454111 | orchestrator | 2025-09-20 10:40:44.454123 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-20 10:40:44.454135 | orchestrator | Saturday 20 September 2025 10:40:37 +0000 (0:00:00.226) 0:00:01.720 **** 2025-09-20 10:40:44.454148 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-20 10:40:44.454164 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-20 10:40:44.454176 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-20 10:40:44.454188 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-20 10:40:44.454200 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-20 10:40:44.454212 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-20 10:40:44.454224 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2025-09-20 10:40:44.454236 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-20 10:40:44.454248 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-20 10:40:44.454260 | orchestrator | 2025-09-20 10:40:44.454272 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-20 10:40:44.454285 | orchestrator | Saturday 20 September 2025 10:40:39 +0000 (0:00:02.008) 0:00:03.729 **** 2025-09-20 10:40:44.454297 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-20 10:40:44.454309 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-20 10:40:44.454321 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-20 10:40:44.454333 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-20 10:40:44.454345 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-20 10:40:44.454357 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-20 10:40:44.454399 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-20 10:40:44.454418 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-20 10:40:44.454437 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-20 10:40:44.454456 | orchestrator | 2025-09-20 10:40:44.454473 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-20 10:40:44.454491 | orchestrator | Saturday 20 September 2025 10:40:40 +0000 (0:00:01.332) 0:00:05.062 **** 2025-09-20 10:40:44.454509 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-20 10:40:44.454526 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-20 10:40:44.454543 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-20 10:40:44.454561 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-20 10:40:44.454577 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-20 10:40:44.454605 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-20 10:40:44.454625 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-20 10:40:44.454655 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-20 10:40:44.454667 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-20 10:40:44.454677 | orchestrator | 2025-09-20 10:40:44.454688 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-20 10:40:44.454699 | orchestrator | Saturday 20 September 2025 10:40:43 +0000 (0:00:02.127) 0:00:07.189 **** 2025-09-20 10:40:44.454710 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:40:44.454721 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:40:44.454732 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:40:44.454743 | orchestrator | 2025-09-20 10:40:44.454753 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-20 10:40:44.454764 | orchestrator | Saturday 20 September 2025 10:40:43 +0000 (0:00:00.593) 0:00:07.783 **** 2025-09-20 10:40:44.454775 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:40:44.454786 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:40:44.454797 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:40:44.454808 | orchestrator | 2025-09-20 10:40:44.454819 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:40:44.454832 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:40:44.454845 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:40:44.454877 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:40:44.454888 | orchestrator | 2025-09-20 10:40:44.454900 | orchestrator | 2025-09-20 10:40:44.454911 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:40:44.454922 | orchestrator | Saturday 20 September 2025 10:40:44 +0000 (0:00:00.579) 0:00:08.362 **** 2025-09-20 10:40:44.454933 | orchestrator | =============================================================================== 2025-09-20 10:40:44.454944 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2025-09-20 10:40:44.454955 | orchestrator | Check device availability ----------------------------------------------- 2.01s 2025-09-20 10:40:44.454966 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-09-20 10:40:44.454976 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2025-09-20 10:40:44.454987 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-09-20 10:40:44.454998 | orchestrator | Request device events from the kernel ----------------------------------- 0.58s 2025-09-20 10:40:44.455009 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.53s 2025-09-20 10:40:44.455020 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-09-20 10:40:44.455031 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-09-20 10:40:56.467937 | orchestrator | 2025-09-20 10:40:56 | INFO  | Task 159f96df-0253-4957-b250-61097cdbfce5 (facts) was prepared for execution. 2025-09-20 10:40:56.468080 | orchestrator | 2025-09-20 10:40:56 | INFO  | It takes a moment until task 159f96df-0253-4957-b250-61097cdbfce5 (facts) has been started and output is visible here. 
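The wipe-partitions play above clears the extra disks on the three storage nodes before Ceph claims them: leftover rook/ceph logical devices are checked for, filesystem and partition signatures are removed with wipefs, the first 32 MiB of each device are zeroed, and udev is asked to re-read the devices. A per-device shell sketch of the same steps (device list taken from the play; the wipefs/dd flags are assumptions, and the commands are destructive):

    # Destructive: remove signatures and zero the first 32 MiB of each
    # Ceph candidate disk, then let udev pick up the new device state.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync
    done
    sudo udevadm control --reload-rules
    sudo udevadm trigger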
2025-09-20 10:41:09.439984 | orchestrator | 2025-09-20 10:41:09.440106 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-20 10:41:09.440124 | orchestrator | 2025-09-20 10:41:09.440136 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 10:41:09.440148 | orchestrator | Saturday 20 September 2025 10:41:00 +0000 (0:00:00.269) 0:00:00.269 **** 2025-09-20 10:41:09.440160 | orchestrator | ok: [testbed-manager] 2025-09-20 10:41:09.440172 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:41:09.440183 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:41:09.440217 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:41:09.440228 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:41:09.440238 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:41:09.440249 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:41:09.440260 | orchestrator | 2025-09-20 10:41:09.440274 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 10:41:09.440285 | orchestrator | Saturday 20 September 2025 10:41:01 +0000 (0:00:01.058) 0:00:01.328 **** 2025-09-20 10:41:09.440296 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:41:09.440308 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:41:09.440319 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:41:09.440329 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:41:09.440340 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:09.440350 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:09.440361 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:09.440421 | orchestrator | 2025-09-20 10:41:09.440433 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 10:41:09.440444 | orchestrator | 2025-09-20 10:41:09.440456 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 10:41:09.440467 | orchestrator | Saturday 20 September 2025 10:41:02 +0000 (0:00:01.255) 0:00:02.584 **** 2025-09-20 10:41:09.440478 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:41:09.440489 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:41:09.440501 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:41:09.440512 | orchestrator | ok: [testbed-manager] 2025-09-20 10:41:09.440525 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:41:09.440537 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:41:09.440550 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:41:09.440563 | orchestrator | 2025-09-20 10:41:09.440575 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 10:41:09.440586 | orchestrator | 2025-09-20 10:41:09.440597 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 10:41:09.440627 | orchestrator | Saturday 20 September 2025 10:41:08 +0000 (0:00:05.473) 0:00:08.058 **** 2025-09-20 10:41:09.440639 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:41:09.440650 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:41:09.440661 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:41:09.440672 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:41:09.440683 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:09.440694 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:09.440704 | orchestrator | skipping: 
[testbed-node-5] 2025-09-20 10:41:09.440715 | orchestrator | 2025-09-20 10:41:09.440726 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:41:09.440738 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440750 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440761 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440772 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440783 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440795 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440806 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:41:09.440817 | orchestrator | 2025-09-20 10:41:09.440837 | orchestrator | 2025-09-20 10:41:09.440848 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:41:09.440859 | orchestrator | Saturday 20 September 2025 10:41:09 +0000 (0:00:00.719) 0:00:08.777 **** 2025-09-20 10:41:09.440870 | orchestrator | =============================================================================== 2025-09-20 10:41:09.440881 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.47s 2025-09-20 10:41:09.440891 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s 2025-09-20 10:41:09.440902 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s 2025-09-20 10:41:09.440913 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-09-20 10:41:11.743690 | orchestrator | 2025-09-20 10:41:11 | INFO  | Task e7b97b36-0bde-4d8f-93ae-782fc7b23493 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-20 10:41:11.743791 | orchestrator | 2025-09-20 10:41:11 | INFO  | It takes a moment until task e7b97b36-0bde-4d8f-93ae-782fc7b23493 (ceph-configure-lvm-volumes) has been started and output is visible here. 
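The ceph-configure-lvm-volumes run that follows assigns a stable UUID to each OSD disk and derives the lvm_volumes entries from it: device sdX gets a volume group ceph-<uuid> holding a logical volume osd-block-<uuid>, and that pair is what the generated configuration later hands to ceph-ansible. A minimal sketch of the LVM objects implied by that naming for a single disk (this play only writes the configuration file; creating the VG/LV as shown here is an assumption about the later provisioning step, and the UUID is copied from the trace below):

    # One lvm_volumes entry, e.g. data_vg=ceph-<uuid>, data=osd-block-<uuid> for /dev/sdb.
    uuid=8bfbaad6-401f-511d-91f2-acbf67028504
    sudo pvcreate /dev/sdb
    sudo vgcreate "ceph-${uuid}" /dev/sdb
    sudo lvcreate -l 100%FREE -n "osd-block-${uuid}" "ceph-${uuid}"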
2025-09-20 10:41:22.933762 | orchestrator | 2025-09-20 10:41:22.933882 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-20 10:41:22.933899 | orchestrator | 2025-09-20 10:41:22.933911 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:41:22.933925 | orchestrator | Saturday 20 September 2025 10:41:15 +0000 (0:00:00.336) 0:00:00.336 **** 2025-09-20 10:41:22.933937 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:41:22.933949 | orchestrator | 2025-09-20 10:41:22.933960 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 10:41:22.933971 | orchestrator | Saturday 20 September 2025 10:41:16 +0000 (0:00:00.245) 0:00:00.582 **** 2025-09-20 10:41:22.933983 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:41:22.933995 | orchestrator | 2025-09-20 10:41:22.934006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934074 | orchestrator | Saturday 20 September 2025 10:41:16 +0000 (0:00:00.226) 0:00:00.808 **** 2025-09-20 10:41:22.934089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-20 10:41:22.934100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-20 10:41:22.934112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-20 10:41:22.934123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-20 10:41:22.934134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-20 10:41:22.934145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-20 10:41:22.934155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-20 10:41:22.934166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-20 10:41:22.934177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-20 10:41:22.934188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-20 10:41:22.934199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-20 10:41:22.934219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-20 10:41:22.934230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-20 10:41:22.934241 | orchestrator | 2025-09-20 10:41:22.934252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934263 | orchestrator | Saturday 20 September 2025 10:41:16 +0000 (0:00:00.373) 0:00:01.182 **** 2025-09-20 10:41:22.934274 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934308 | orchestrator | 2025-09-20 10:41:22.934321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934333 | orchestrator | Saturday 20 September 2025 10:41:17 +0000 (0:00:00.487) 0:00:01.670 **** 2025-09-20 10:41:22.934345 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
10:41:22.934357 | orchestrator | 2025-09-20 10:41:22.934369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934455 | orchestrator | Saturday 20 September 2025 10:41:17 +0000 (0:00:00.197) 0:00:01.867 **** 2025-09-20 10:41:22.934468 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934480 | orchestrator | 2025-09-20 10:41:22.934493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934505 | orchestrator | Saturday 20 September 2025 10:41:17 +0000 (0:00:00.197) 0:00:02.065 **** 2025-09-20 10:41:22.934517 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934533 | orchestrator | 2025-09-20 10:41:22.934545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934557 | orchestrator | Saturday 20 September 2025 10:41:17 +0000 (0:00:00.202) 0:00:02.267 **** 2025-09-20 10:41:22.934569 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934581 | orchestrator | 2025-09-20 10:41:22.934594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934605 | orchestrator | Saturday 20 September 2025 10:41:18 +0000 (0:00:00.199) 0:00:02.466 **** 2025-09-20 10:41:22.934616 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934626 | orchestrator | 2025-09-20 10:41:22.934637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934648 | orchestrator | Saturday 20 September 2025 10:41:18 +0000 (0:00:00.193) 0:00:02.660 **** 2025-09-20 10:41:22.934658 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934669 | orchestrator | 2025-09-20 10:41:22.934680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934691 | orchestrator | Saturday 20 September 2025 10:41:18 +0000 (0:00:00.221) 0:00:02.881 **** 2025-09-20 10:41:22.934702 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.934712 | orchestrator | 2025-09-20 10:41:22.934723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934734 | orchestrator | Saturday 20 September 2025 10:41:18 +0000 (0:00:00.188) 0:00:03.069 **** 2025-09-20 10:41:22.934745 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d) 2025-09-20 10:41:22.934757 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d) 2025-09-20 10:41:22.934768 | orchestrator | 2025-09-20 10:41:22.934779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934790 | orchestrator | Saturday 20 September 2025 10:41:19 +0000 (0:00:00.374) 0:00:03.444 **** 2025-09-20 10:41:22.934821 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf) 2025-09-20 10:41:22.934833 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf) 2025-09-20 10:41:22.934844 | orchestrator | 2025-09-20 10:41:22.934855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934865 | orchestrator | Saturday 20 September 2025 10:41:19 +0000 (0:00:00.398) 0:00:03.842 **** 2025-09-20 
10:41:22.934876 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949) 2025-09-20 10:41:22.934887 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949) 2025-09-20 10:41:22.934898 | orchestrator | 2025-09-20 10:41:22.934909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934919 | orchestrator | Saturday 20 September 2025 10:41:19 +0000 (0:00:00.507) 0:00:04.349 **** 2025-09-20 10:41:22.934930 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f) 2025-09-20 10:41:22.934950 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f) 2025-09-20 10:41:22.934960 | orchestrator | 2025-09-20 10:41:22.934971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:22.934982 | orchestrator | Saturday 20 September 2025 10:41:20 +0000 (0:00:00.516) 0:00:04.865 **** 2025-09-20 10:41:22.934992 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 10:41:22.935003 | orchestrator | 2025-09-20 10:41:22.935014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935030 | orchestrator | Saturday 20 September 2025 10:41:21 +0000 (0:00:00.569) 0:00:05.435 **** 2025-09-20 10:41:22.935041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-20 10:41:22.935052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-20 10:41:22.935062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-20 10:41:22.935073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-20 10:41:22.935083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-20 10:41:22.935094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-20 10:41:22.935104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-20 10:41:22.935115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-20 10:41:22.935125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-20 10:41:22.935136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-20 10:41:22.935146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-20 10:41:22.935157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-20 10:41:22.935167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-20 10:41:22.935178 | orchestrator | 2025-09-20 10:41:22.935189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935199 | orchestrator | Saturday 20 September 2025 10:41:21 +0000 (0:00:00.358) 0:00:05.794 **** 2025-09-20 10:41:22.935210 | orchestrator | skipping: [testbed-node-3] 
2025-09-20 10:41:22.935220 | orchestrator | 2025-09-20 10:41:22.935231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935242 | orchestrator | Saturday 20 September 2025 10:41:21 +0000 (0:00:00.182) 0:00:05.976 **** 2025-09-20 10:41:22.935252 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.935263 | orchestrator | 2025-09-20 10:41:22.935274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935284 | orchestrator | Saturday 20 September 2025 10:41:21 +0000 (0:00:00.192) 0:00:06.169 **** 2025-09-20 10:41:22.935295 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.935306 | orchestrator | 2025-09-20 10:41:22.935316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935327 | orchestrator | Saturday 20 September 2025 10:41:21 +0000 (0:00:00.195) 0:00:06.364 **** 2025-09-20 10:41:22.935338 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.935348 | orchestrator | 2025-09-20 10:41:22.935359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935390 | orchestrator | Saturday 20 September 2025 10:41:22 +0000 (0:00:00.207) 0:00:06.572 **** 2025-09-20 10:41:22.935401 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.935412 | orchestrator | 2025-09-20 10:41:22.935430 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935441 | orchestrator | Saturday 20 September 2025 10:41:22 +0000 (0:00:00.191) 0:00:06.763 **** 2025-09-20 10:41:22.935451 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.935462 | orchestrator | 2025-09-20 10:41:22.935473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935483 | orchestrator | Saturday 20 September 2025 10:41:22 +0000 (0:00:00.162) 0:00:06.926 **** 2025-09-20 10:41:22.935494 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:22.935505 | orchestrator | 2025-09-20 10:41:22.935515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:22.935526 | orchestrator | Saturday 20 September 2025 10:41:22 +0000 (0:00:00.187) 0:00:07.114 **** 2025-09-20 10:41:22.935544 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658075 | orchestrator | 2025-09-20 10:41:29.658185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:29.658202 | orchestrator | Saturday 20 September 2025 10:41:22 +0000 (0:00:00.202) 0:00:07.317 **** 2025-09-20 10:41:29.658214 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-20 10:41:29.658228 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-20 10:41:29.658240 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-20 10:41:29.658251 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-20 10:41:29.658262 | orchestrator | 2025-09-20 10:41:29.658274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:29.658285 | orchestrator | Saturday 20 September 2025 10:41:23 +0000 (0:00:00.826) 0:00:08.144 **** 2025-09-20 10:41:29.658296 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658307 | orchestrator | 2025-09-20 10:41:29.658318 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:29.658329 | orchestrator | Saturday 20 September 2025 10:41:23 +0000 (0:00:00.211) 0:00:08.355 **** 2025-09-20 10:41:29.658340 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658351 | orchestrator | 2025-09-20 10:41:29.658363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:29.658401 | orchestrator | Saturday 20 September 2025 10:41:24 +0000 (0:00:00.180) 0:00:08.536 **** 2025-09-20 10:41:29.658413 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658424 | orchestrator | 2025-09-20 10:41:29.658435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:29.658446 | orchestrator | Saturday 20 September 2025 10:41:24 +0000 (0:00:00.196) 0:00:08.733 **** 2025-09-20 10:41:29.658457 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658468 | orchestrator | 2025-09-20 10:41:29.658479 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-20 10:41:29.658490 | orchestrator | Saturday 20 September 2025 10:41:24 +0000 (0:00:00.190) 0:00:08.923 **** 2025-09-20 10:41:29.658501 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-20 10:41:29.658512 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-20 10:41:29.658523 | orchestrator | 2025-09-20 10:41:29.658534 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-20 10:41:29.658545 | orchestrator | Saturday 20 September 2025 10:41:24 +0000 (0:00:00.163) 0:00:09.087 **** 2025-09-20 10:41:29.658575 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658589 | orchestrator | 2025-09-20 10:41:29.658602 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-20 10:41:29.658615 | orchestrator | Saturday 20 September 2025 10:41:24 +0000 (0:00:00.140) 0:00:09.228 **** 2025-09-20 10:41:29.658627 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658638 | orchestrator | 2025-09-20 10:41:29.658651 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-20 10:41:29.658663 | orchestrator | Saturday 20 September 2025 10:41:24 +0000 (0:00:00.135) 0:00:09.364 **** 2025-09-20 10:41:29.658676 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658710 | orchestrator | 2025-09-20 10:41:29.658723 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-20 10:41:29.658735 | orchestrator | Saturday 20 September 2025 10:41:25 +0000 (0:00:00.164) 0:00:09.529 **** 2025-09-20 10:41:29.658747 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:41:29.658760 | orchestrator | 2025-09-20 10:41:29.658772 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-20 10:41:29.658785 | orchestrator | Saturday 20 September 2025 10:41:25 +0000 (0:00:00.130) 0:00:09.659 **** 2025-09-20 10:41:29.658798 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bfbaad6-401f-511d-91f2-acbf67028504'}}) 2025-09-20 10:41:29.658811 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44b8c0b1-de10-587f-a252-374190a68e04'}}) 2025-09-20 10:41:29.658823 | orchestrator | 
2025-09-20 10:41:29.658835 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-20 10:41:29.658847 | orchestrator | Saturday 20 September 2025 10:41:25 +0000 (0:00:00.166) 0:00:09.825 **** 2025-09-20 10:41:29.658860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bfbaad6-401f-511d-91f2-acbf67028504'}})  2025-09-20 10:41:29.658881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44b8c0b1-de10-587f-a252-374190a68e04'}})  2025-09-20 10:41:29.658894 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658906 | orchestrator | 2025-09-20 10:41:29.658918 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-20 10:41:29.658930 | orchestrator | Saturday 20 September 2025 10:41:25 +0000 (0:00:00.144) 0:00:09.970 **** 2025-09-20 10:41:29.658941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bfbaad6-401f-511d-91f2-acbf67028504'}})  2025-09-20 10:41:29.658952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44b8c0b1-de10-587f-a252-374190a68e04'}})  2025-09-20 10:41:29.658963 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.658973 | orchestrator | 2025-09-20 10:41:29.658984 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-20 10:41:29.658995 | orchestrator | Saturday 20 September 2025 10:41:25 +0000 (0:00:00.284) 0:00:10.254 **** 2025-09-20 10:41:29.659006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bfbaad6-401f-511d-91f2-acbf67028504'}})  2025-09-20 10:41:29.659017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44b8c0b1-de10-587f-a252-374190a68e04'}})  2025-09-20 10:41:29.659028 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659039 | orchestrator | 2025-09-20 10:41:29.659069 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-20 10:41:29.659081 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.146) 0:00:10.401 **** 2025-09-20 10:41:29.659092 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:41:29.659103 | orchestrator | 2025-09-20 10:41:29.659120 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-20 10:41:29.659131 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.137) 0:00:10.539 **** 2025-09-20 10:41:29.659142 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:41:29.659153 | orchestrator | 2025-09-20 10:41:29.659164 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-20 10:41:29.659176 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.130) 0:00:10.669 **** 2025-09-20 10:41:29.659186 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659197 | orchestrator | 2025-09-20 10:41:29.659208 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-20 10:41:29.659219 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.114) 0:00:10.784 **** 2025-09-20 10:41:29.659230 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659241 | orchestrator | 2025-09-20 10:41:29.659260 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-20 10:41:29.659271 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.130) 0:00:10.914 **** 2025-09-20 10:41:29.659282 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659293 | orchestrator | 2025-09-20 10:41:29.659304 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-20 10:41:29.659315 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.121) 0:00:11.036 **** 2025-09-20 10:41:29.659326 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 10:41:29.659337 | orchestrator |  "ceph_osd_devices": { 2025-09-20 10:41:29.659348 | orchestrator |  "sdb": { 2025-09-20 10:41:29.659360 | orchestrator |  "osd_lvm_uuid": "8bfbaad6-401f-511d-91f2-acbf67028504" 2025-09-20 10:41:29.659390 | orchestrator |  }, 2025-09-20 10:41:29.659402 | orchestrator |  "sdc": { 2025-09-20 10:41:29.659413 | orchestrator |  "osd_lvm_uuid": "44b8c0b1-de10-587f-a252-374190a68e04" 2025-09-20 10:41:29.659424 | orchestrator |  } 2025-09-20 10:41:29.659435 | orchestrator |  } 2025-09-20 10:41:29.659446 | orchestrator | } 2025-09-20 10:41:29.659458 | orchestrator | 2025-09-20 10:41:29.659469 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-20 10:41:29.659480 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.129) 0:00:11.165 **** 2025-09-20 10:41:29.659490 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659501 | orchestrator | 2025-09-20 10:41:29.659512 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-20 10:41:29.659523 | orchestrator | Saturday 20 September 2025 10:41:26 +0000 (0:00:00.137) 0:00:11.302 **** 2025-09-20 10:41:29.659534 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659545 | orchestrator | 2025-09-20 10:41:29.659556 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-20 10:41:29.659567 | orchestrator | Saturday 20 September 2025 10:41:27 +0000 (0:00:00.124) 0:00:11.426 **** 2025-09-20 10:41:29.659578 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:41:29.659589 | orchestrator | 2025-09-20 10:41:29.659600 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-20 10:41:29.659611 | orchestrator | Saturday 20 September 2025 10:41:27 +0000 (0:00:00.114) 0:00:11.540 **** 2025-09-20 10:41:29.659621 | orchestrator | changed: [testbed-node-3] => { 2025-09-20 10:41:29.659633 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-20 10:41:29.659644 | orchestrator |  "ceph_osd_devices": { 2025-09-20 10:41:29.659655 | orchestrator |  "sdb": { 2025-09-20 10:41:29.659666 | orchestrator |  "osd_lvm_uuid": "8bfbaad6-401f-511d-91f2-acbf67028504" 2025-09-20 10:41:29.659677 | orchestrator |  }, 2025-09-20 10:41:29.659688 | orchestrator |  "sdc": { 2025-09-20 10:41:29.659699 | orchestrator |  "osd_lvm_uuid": "44b8c0b1-de10-587f-a252-374190a68e04" 2025-09-20 10:41:29.659710 | orchestrator |  } 2025-09-20 10:41:29.659721 | orchestrator |  }, 2025-09-20 10:41:29.659732 | orchestrator |  "lvm_volumes": [ 2025-09-20 10:41:29.659743 | orchestrator |  { 2025-09-20 10:41:29.659754 | orchestrator |  "data": "osd-block-8bfbaad6-401f-511d-91f2-acbf67028504", 2025-09-20 10:41:29.659765 | orchestrator |  "data_vg": "ceph-8bfbaad6-401f-511d-91f2-acbf67028504" 2025-09-20 10:41:29.659776 | orchestrator |  }, 2025-09-20 
10:41:29.659787 | orchestrator |  { 2025-09-20 10:41:29.659798 | orchestrator |  "data": "osd-block-44b8c0b1-de10-587f-a252-374190a68e04", 2025-09-20 10:41:29.659809 | orchestrator |  "data_vg": "ceph-44b8c0b1-de10-587f-a252-374190a68e04" 2025-09-20 10:41:29.659820 | orchestrator |  } 2025-09-20 10:41:29.659831 | orchestrator |  ] 2025-09-20 10:41:29.659842 | orchestrator |  } 2025-09-20 10:41:29.659853 | orchestrator | } 2025-09-20 10:41:29.659864 | orchestrator | 2025-09-20 10:41:29.659881 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-20 10:41:29.659907 | orchestrator | Saturday 20 September 2025 10:41:27 +0000 (0:00:00.183) 0:00:11.724 **** 2025-09-20 10:41:29.659918 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:41:29.659929 | orchestrator | 2025-09-20 10:41:29.659940 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-20 10:41:29.659951 | orchestrator | 2025-09-20 10:41:29.659962 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:41:29.659973 | orchestrator | Saturday 20 September 2025 10:41:29 +0000 (0:00:01.884) 0:00:13.609 **** 2025-09-20 10:41:29.659984 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-20 10:41:29.659995 | orchestrator | 2025-09-20 10:41:29.660006 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 10:41:29.660016 | orchestrator | Saturday 20 September 2025 10:41:29 +0000 (0:00:00.221) 0:00:13.830 **** 2025-09-20 10:41:29.660027 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:41:29.660038 | orchestrator | 2025-09-20 10:41:29.660049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:29.660067 | orchestrator | Saturday 20 September 2025 10:41:29 +0000 (0:00:00.211) 0:00:14.041 **** 2025-09-20 10:41:36.378921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-20 10:41:36.379051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-20 10:41:36.379069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-20 10:41:36.379081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-20 10:41:36.379093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-20 10:41:36.379104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-20 10:41:36.379115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-20 10:41:36.379126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-20 10:41:36.379137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-20 10:41:36.379148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-20 10:41:36.379159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-20 10:41:36.379170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-20 10:41:36.379181 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-20 10:41:36.379197 | orchestrator | 2025-09-20 10:41:36.379210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379222 | orchestrator | Saturday 20 September 2025 10:41:30 +0000 (0:00:00.371) 0:00:14.413 **** 2025-09-20 10:41:36.379234 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379246 | orchestrator | 2025-09-20 10:41:36.379258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379269 | orchestrator | Saturday 20 September 2025 10:41:30 +0000 (0:00:00.180) 0:00:14.593 **** 2025-09-20 10:41:36.379280 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379291 | orchestrator | 2025-09-20 10:41:36.379302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379313 | orchestrator | Saturday 20 September 2025 10:41:30 +0000 (0:00:00.188) 0:00:14.782 **** 2025-09-20 10:41:36.379324 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379335 | orchestrator | 2025-09-20 10:41:36.379347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379358 | orchestrator | Saturday 20 September 2025 10:41:30 +0000 (0:00:00.190) 0:00:14.972 **** 2025-09-20 10:41:36.379369 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379430 | orchestrator | 2025-09-20 10:41:36.379443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379456 | orchestrator | Saturday 20 September 2025 10:41:30 +0000 (0:00:00.160) 0:00:15.133 **** 2025-09-20 10:41:36.379468 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379480 | orchestrator | 2025-09-20 10:41:36.379493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379505 | orchestrator | Saturday 20 September 2025 10:41:31 +0000 (0:00:00.466) 0:00:15.599 **** 2025-09-20 10:41:36.379517 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379529 | orchestrator | 2025-09-20 10:41:36.379541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379553 | orchestrator | Saturday 20 September 2025 10:41:31 +0000 (0:00:00.169) 0:00:15.769 **** 2025-09-20 10:41:36.379584 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379597 | orchestrator | 2025-09-20 10:41:36.379610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379622 | orchestrator | Saturday 20 September 2025 10:41:31 +0000 (0:00:00.184) 0:00:15.953 **** 2025-09-20 10:41:36.379634 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.379646 | orchestrator | 2025-09-20 10:41:36.379658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379670 | orchestrator | Saturday 20 September 2025 10:41:31 +0000 (0:00:00.190) 0:00:16.144 **** 2025-09-20 10:41:36.379682 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99) 2025-09-20 10:41:36.379696 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99) 2025-09-20 10:41:36.379708 | orchestrator | 2025-09-20 
10:41:36.379720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379732 | orchestrator | Saturday 20 September 2025 10:41:32 +0000 (0:00:00.376) 0:00:16.520 **** 2025-09-20 10:41:36.379744 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652) 2025-09-20 10:41:36.379756 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652) 2025-09-20 10:41:36.379768 | orchestrator | 2025-09-20 10:41:36.379780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379792 | orchestrator | Saturday 20 September 2025 10:41:32 +0000 (0:00:00.369) 0:00:16.890 **** 2025-09-20 10:41:36.379804 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58) 2025-09-20 10:41:36.379815 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58) 2025-09-20 10:41:36.379826 | orchestrator | 2025-09-20 10:41:36.379837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379848 | orchestrator | Saturday 20 September 2025 10:41:32 +0000 (0:00:00.376) 0:00:17.266 **** 2025-09-20 10:41:36.379879 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22) 2025-09-20 10:41:36.379891 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22) 2025-09-20 10:41:36.379902 | orchestrator | 2025-09-20 10:41:36.379913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:36.379925 | orchestrator | Saturday 20 September 2025 10:41:33 +0000 (0:00:00.381) 0:00:17.647 **** 2025-09-20 10:41:36.379935 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 10:41:36.379946 | orchestrator | 2025-09-20 10:41:36.379957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.379968 | orchestrator | Saturday 20 September 2025 10:41:33 +0000 (0:00:00.297) 0:00:17.945 **** 2025-09-20 10:41:36.379979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-20 10:41:36.380000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-20 10:41:36.380011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-20 10:41:36.380022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-20 10:41:36.380032 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-20 10:41:36.380043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-20 10:41:36.380054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-20 10:41:36.380064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-20 10:41:36.380075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-20 10:41:36.380086 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-20 10:41:36.380096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-20 10:41:36.380107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-20 10:41:36.380117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-20 10:41:36.380128 | orchestrator | 2025-09-20 10:41:36.380139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380150 | orchestrator | Saturday 20 September 2025 10:41:33 +0000 (0:00:00.305) 0:00:18.251 **** 2025-09-20 10:41:36.380161 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380172 | orchestrator | 2025-09-20 10:41:36.380183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380193 | orchestrator | Saturday 20 September 2025 10:41:34 +0000 (0:00:00.161) 0:00:18.412 **** 2025-09-20 10:41:36.380204 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380215 | orchestrator | 2025-09-20 10:41:36.380233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380244 | orchestrator | Saturday 20 September 2025 10:41:34 +0000 (0:00:00.446) 0:00:18.858 **** 2025-09-20 10:41:36.380255 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380266 | orchestrator | 2025-09-20 10:41:36.380277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380288 | orchestrator | Saturday 20 September 2025 10:41:34 +0000 (0:00:00.201) 0:00:19.060 **** 2025-09-20 10:41:36.380299 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380310 | orchestrator | 2025-09-20 10:41:36.380321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380332 | orchestrator | Saturday 20 September 2025 10:41:34 +0000 (0:00:00.169) 0:00:19.229 **** 2025-09-20 10:41:36.380343 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380354 | orchestrator | 2025-09-20 10:41:36.380365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380391 | orchestrator | Saturday 20 September 2025 10:41:35 +0000 (0:00:00.173) 0:00:19.402 **** 2025-09-20 10:41:36.380403 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380413 | orchestrator | 2025-09-20 10:41:36.380424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380435 | orchestrator | Saturday 20 September 2025 10:41:35 +0000 (0:00:00.168) 0:00:19.570 **** 2025-09-20 10:41:36.380446 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380457 | orchestrator | 2025-09-20 10:41:36.380468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380478 | orchestrator | Saturday 20 September 2025 10:41:35 +0000 (0:00:00.176) 0:00:19.747 **** 2025-09-20 10:41:36.380489 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380500 | orchestrator | 2025-09-20 10:41:36.380511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380528 | orchestrator | Saturday 20 September 
2025 10:41:35 +0000 (0:00:00.181) 0:00:19.929 **** 2025-09-20 10:41:36.380539 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-20 10:41:36.380680 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-20 10:41:36.380694 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-20 10:41:36.380728 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-20 10:41:36.380739 | orchestrator | 2025-09-20 10:41:36.380751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:36.380762 | orchestrator | Saturday 20 September 2025 10:41:36 +0000 (0:00:00.654) 0:00:20.583 **** 2025-09-20 10:41:36.380794 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:36.380805 | orchestrator | 2025-09-20 10:41:36.380826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:42.854478 | orchestrator | Saturday 20 September 2025 10:41:36 +0000 (0:00:00.181) 0:00:20.765 **** 2025-09-20 10:41:42.854587 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.854602 | orchestrator | 2025-09-20 10:41:42.854615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:42.854626 | orchestrator | Saturday 20 September 2025 10:41:36 +0000 (0:00:00.235) 0:00:21.001 **** 2025-09-20 10:41:42.854636 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.854646 | orchestrator | 2025-09-20 10:41:42.854656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:42.854666 | orchestrator | Saturday 20 September 2025 10:41:36 +0000 (0:00:00.338) 0:00:21.339 **** 2025-09-20 10:41:42.854676 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.854773 | orchestrator | 2025-09-20 10:41:42.854784 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-20 10:41:42.854794 | orchestrator | Saturday 20 September 2025 10:41:37 +0000 (0:00:00.184) 0:00:21.524 **** 2025-09-20 10:41:42.854804 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-20 10:41:42.854814 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-20 10:41:42.854823 | orchestrator | 2025-09-20 10:41:42.854833 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-20 10:41:42.854843 | orchestrator | Saturday 20 September 2025 10:41:37 +0000 (0:00:00.422) 0:00:21.946 **** 2025-09-20 10:41:42.854852 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.854862 | orchestrator | 2025-09-20 10:41:42.854872 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-20 10:41:42.854882 | orchestrator | Saturday 20 September 2025 10:41:37 +0000 (0:00:00.128) 0:00:22.075 **** 2025-09-20 10:41:42.854891 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.854901 | orchestrator | 2025-09-20 10:41:42.854911 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-20 10:41:42.854920 | orchestrator | Saturday 20 September 2025 10:41:37 +0000 (0:00:00.121) 0:00:22.196 **** 2025-09-20 10:41:42.854930 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855032 | orchestrator | 2025-09-20 10:41:42.855045 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-20 
10:41:42.855056 | orchestrator | Saturday 20 September 2025 10:41:37 +0000 (0:00:00.135) 0:00:22.332 **** 2025-09-20 10:41:42.855067 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:41:42.855078 | orchestrator | 2025-09-20 10:41:42.855089 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-20 10:41:42.855100 | orchestrator | Saturday 20 September 2025 10:41:38 +0000 (0:00:00.147) 0:00:22.480 **** 2025-09-20 10:41:42.855111 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}}) 2025-09-20 10:41:42.855123 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7feb156-b84d-561e-a62b-66fdb35e8084'}}) 2025-09-20 10:41:42.855133 | orchestrator | 2025-09-20 10:41:42.855144 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-20 10:41:42.855176 | orchestrator | Saturday 20 September 2025 10:41:38 +0000 (0:00:00.170) 0:00:22.650 **** 2025-09-20 10:41:42.855189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}})  2025-09-20 10:41:42.855201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7feb156-b84d-561e-a62b-66fdb35e8084'}})  2025-09-20 10:41:42.855212 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855222 | orchestrator | 2025-09-20 10:41:42.855250 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-20 10:41:42.855261 | orchestrator | Saturday 20 September 2025 10:41:38 +0000 (0:00:00.162) 0:00:22.813 **** 2025-09-20 10:41:42.855272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}})  2025-09-20 10:41:42.855283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7feb156-b84d-561e-a62b-66fdb35e8084'}})  2025-09-20 10:41:42.855294 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855304 | orchestrator | 2025-09-20 10:41:42.855315 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-20 10:41:42.855326 | orchestrator | Saturday 20 September 2025 10:41:38 +0000 (0:00:00.164) 0:00:22.977 **** 2025-09-20 10:41:42.855336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}})  2025-09-20 10:41:42.855347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7feb156-b84d-561e-a62b-66fdb35e8084'}})  2025-09-20 10:41:42.855357 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855367 | orchestrator | 2025-09-20 10:41:42.855401 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-20 10:41:42.855412 | orchestrator | Saturday 20 September 2025 10:41:38 +0000 (0:00:00.158) 0:00:23.136 **** 2025-09-20 10:41:42.855421 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:41:42.855431 | orchestrator | 2025-09-20 10:41:42.855441 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-20 10:41:42.855450 | orchestrator | Saturday 20 September 2025 10:41:38 +0000 (0:00:00.158) 0:00:23.294 **** 2025-09-20 10:41:42.855460 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:41:42.855470 
| orchestrator | 2025-09-20 10:41:42.855479 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-20 10:41:42.855489 | orchestrator | Saturday 20 September 2025 10:41:39 +0000 (0:00:00.148) 0:00:23.443 **** 2025-09-20 10:41:42.855499 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855508 | orchestrator | 2025-09-20 10:41:42.855538 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-20 10:41:42.855548 | orchestrator | Saturday 20 September 2025 10:41:39 +0000 (0:00:00.170) 0:00:23.614 **** 2025-09-20 10:41:42.855558 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855567 | orchestrator | 2025-09-20 10:41:42.855577 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-20 10:41:42.855586 | orchestrator | Saturday 20 September 2025 10:41:39 +0000 (0:00:00.389) 0:00:24.004 **** 2025-09-20 10:41:42.855596 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855605 | orchestrator | 2025-09-20 10:41:42.855646 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-20 10:41:42.855658 | orchestrator | Saturday 20 September 2025 10:41:39 +0000 (0:00:00.134) 0:00:24.138 **** 2025-09-20 10:41:42.855668 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 10:41:42.855677 | orchestrator |  "ceph_osd_devices": { 2025-09-20 10:41:42.855687 | orchestrator |  "sdb": { 2025-09-20 10:41:42.855698 | orchestrator |  "osd_lvm_uuid": "6a9e85d2-bd62-5d0b-9b06-ebe373b508be" 2025-09-20 10:41:42.855707 | orchestrator |  }, 2025-09-20 10:41:42.855749 | orchestrator |  "sdc": { 2025-09-20 10:41:42.855771 | orchestrator |  "osd_lvm_uuid": "d7feb156-b84d-561e-a62b-66fdb35e8084" 2025-09-20 10:41:42.855781 | orchestrator |  } 2025-09-20 10:41:42.855791 | orchestrator |  } 2025-09-20 10:41:42.855801 | orchestrator | } 2025-09-20 10:41:42.855811 | orchestrator | 2025-09-20 10:41:42.855821 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-20 10:41:42.855831 | orchestrator | Saturday 20 September 2025 10:41:39 +0000 (0:00:00.153) 0:00:24.291 **** 2025-09-20 10:41:42.855840 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855850 | orchestrator | 2025-09-20 10:41:42.855859 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-20 10:41:42.855869 | orchestrator | Saturday 20 September 2025 10:41:40 +0000 (0:00:00.152) 0:00:24.444 **** 2025-09-20 10:41:42.855879 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855888 | orchestrator | 2025-09-20 10:41:42.855898 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-20 10:41:42.855908 | orchestrator | Saturday 20 September 2025 10:41:40 +0000 (0:00:00.124) 0:00:24.568 **** 2025-09-20 10:41:42.855917 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:41:42.855927 | orchestrator | 2025-09-20 10:41:42.856049 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-20 10:41:42.856060 | orchestrator | Saturday 20 September 2025 10:41:40 +0000 (0:00:00.108) 0:00:24.677 **** 2025-09-20 10:41:42.856070 | orchestrator | changed: [testbed-node-4] => { 2025-09-20 10:41:42.856079 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-20 10:41:42.856089 | orchestrator |  "ceph_osd_devices": { 2025-09-20 
10:41:42.856099 | orchestrator |  "sdb": { 2025-09-20 10:41:42.856108 | orchestrator |  "osd_lvm_uuid": "6a9e85d2-bd62-5d0b-9b06-ebe373b508be" 2025-09-20 10:41:42.856118 | orchestrator |  }, 2025-09-20 10:41:42.856128 | orchestrator |  "sdc": { 2025-09-20 10:41:42.856137 | orchestrator |  "osd_lvm_uuid": "d7feb156-b84d-561e-a62b-66fdb35e8084" 2025-09-20 10:41:42.856147 | orchestrator |  } 2025-09-20 10:41:42.856156 | orchestrator |  }, 2025-09-20 10:41:42.856166 | orchestrator |  "lvm_volumes": [ 2025-09-20 10:41:42.856176 | orchestrator |  { 2025-09-20 10:41:42.856185 | orchestrator |  "data": "osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be", 2025-09-20 10:41:42.856195 | orchestrator |  "data_vg": "ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be" 2025-09-20 10:41:42.856204 | orchestrator |  }, 2025-09-20 10:41:42.856214 | orchestrator |  { 2025-09-20 10:41:42.856223 | orchestrator |  "data": "osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084", 2025-09-20 10:41:42.856233 | orchestrator |  "data_vg": "ceph-d7feb156-b84d-561e-a62b-66fdb35e8084" 2025-09-20 10:41:42.856242 | orchestrator |  } 2025-09-20 10:41:42.856252 | orchestrator |  ] 2025-09-20 10:41:42.856262 | orchestrator |  } 2025-09-20 10:41:42.856271 | orchestrator | } 2025-09-20 10:41:42.856281 | orchestrator | 2025-09-20 10:41:42.856290 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-20 10:41:42.856300 | orchestrator | Saturday 20 September 2025 10:41:40 +0000 (0:00:00.189) 0:00:24.866 **** 2025-09-20 10:41:42.856309 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-20 10:41:42.856319 | orchestrator | 2025-09-20 10:41:42.856328 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-20 10:41:42.856338 | orchestrator | 2025-09-20 10:41:42.856347 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:41:42.856357 | orchestrator | Saturday 20 September 2025 10:41:41 +0000 (0:00:00.949) 0:00:25.815 **** 2025-09-20 10:41:42.856366 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-20 10:41:42.856406 | orchestrator | 2025-09-20 10:41:42.856416 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 10:41:42.856425 | orchestrator | Saturday 20 September 2025 10:41:41 +0000 (0:00:00.406) 0:00:26.222 **** 2025-09-20 10:41:42.856443 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:41:42.856453 | orchestrator | 2025-09-20 10:41:42.856469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:42.856479 | orchestrator | Saturday 20 September 2025 10:41:42 +0000 (0:00:00.582) 0:00:26.804 **** 2025-09-20 10:41:42.856489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-20 10:41:42.856498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-20 10:41:42.856508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-20 10:41:42.856517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-20 10:41:42.856527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-20 10:41:42.856536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-20 10:41:42.856553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-20 10:41:50.295496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-20 10:41:50.295604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-20 10:41:50.295620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-20 10:41:50.295632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-20 10:41:50.295642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-20 10:41:50.295653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-20 10:41:50.295664 | orchestrator | 2025-09-20 10:41:50.295678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295690 | orchestrator | Saturday 20 September 2025 10:41:42 +0000 (0:00:00.432) 0:00:27.237 **** 2025-09-20 10:41:50.295701 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295713 | orchestrator | 2025-09-20 10:41:50.295724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295735 | orchestrator | Saturday 20 September 2025 10:41:43 +0000 (0:00:00.212) 0:00:27.449 **** 2025-09-20 10:41:50.295746 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295756 | orchestrator | 2025-09-20 10:41:50.295767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295778 | orchestrator | Saturday 20 September 2025 10:41:43 +0000 (0:00:00.171) 0:00:27.621 **** 2025-09-20 10:41:50.295789 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295799 | orchestrator | 2025-09-20 10:41:50.295810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295821 | orchestrator | Saturday 20 September 2025 10:41:43 +0000 (0:00:00.185) 0:00:27.807 **** 2025-09-20 10:41:50.295832 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295842 | orchestrator | 2025-09-20 10:41:50.295853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295864 | orchestrator | Saturday 20 September 2025 10:41:43 +0000 (0:00:00.181) 0:00:27.989 **** 2025-09-20 10:41:50.295875 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295885 | orchestrator | 2025-09-20 10:41:50.295896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295907 | orchestrator | Saturday 20 September 2025 10:41:43 +0000 (0:00:00.174) 0:00:28.163 **** 2025-09-20 10:41:50.295918 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295928 | orchestrator | 2025-09-20 10:41:50.295939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.295950 | orchestrator | Saturday 20 September 2025 10:41:43 +0000 (0:00:00.211) 0:00:28.374 **** 2025-09-20 10:41:50.295961 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.295991 | orchestrator | 2025-09-20 10:41:50.296005 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-20 10:41:50.296017 | orchestrator | Saturday 20 September 2025 10:41:44 +0000 (0:00:00.185) 0:00:28.560 **** 2025-09-20 10:41:50.296029 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296040 | orchestrator | 2025-09-20 10:41:50.296052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.296064 | orchestrator | Saturday 20 September 2025 10:41:44 +0000 (0:00:00.178) 0:00:28.738 **** 2025-09-20 10:41:50.296076 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d) 2025-09-20 10:41:50.296089 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d) 2025-09-20 10:41:50.296101 | orchestrator | 2025-09-20 10:41:50.296113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.296125 | orchestrator | Saturday 20 September 2025 10:41:44 +0000 (0:00:00.535) 0:00:29.274 **** 2025-09-20 10:41:50.296137 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b) 2025-09-20 10:41:50.296149 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b) 2025-09-20 10:41:50.296161 | orchestrator | 2025-09-20 10:41:50.296173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.296185 | orchestrator | Saturday 20 September 2025 10:41:45 +0000 (0:00:00.702) 0:00:29.976 **** 2025-09-20 10:41:50.296197 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8) 2025-09-20 10:41:50.296209 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8) 2025-09-20 10:41:50.296221 | orchestrator | 2025-09-20 10:41:50.296233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.296244 | orchestrator | Saturday 20 September 2025 10:41:45 +0000 (0:00:00.396) 0:00:30.372 **** 2025-09-20 10:41:50.296256 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b) 2025-09-20 10:41:50.296268 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b) 2025-09-20 10:41:50.296280 | orchestrator | 2025-09-20 10:41:50.296292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:41:50.296304 | orchestrator | Saturday 20 September 2025 10:41:46 +0000 (0:00:00.423) 0:00:30.796 **** 2025-09-20 10:41:50.296316 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 10:41:50.296328 | orchestrator | 2025-09-20 10:41:50.296340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296351 | orchestrator | Saturday 20 September 2025 10:41:46 +0000 (0:00:00.307) 0:00:31.104 **** 2025-09-20 10:41:50.296401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-20 10:41:50.296414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-20 10:41:50.296424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-20 10:41:50.296435 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-20 10:41:50.296445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-20 10:41:50.296456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-20 10:41:50.296477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-20 10:41:50.296489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-20 10:41:50.296500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-20 10:41:50.296518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-20 10:41:50.296529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-20 10:41:50.296540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-20 10:41:50.296551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-20 10:41:50.296561 | orchestrator | 2025-09-20 10:41:50.296572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296583 | orchestrator | Saturday 20 September 2025 10:41:47 +0000 (0:00:00.471) 0:00:31.576 **** 2025-09-20 10:41:50.296594 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296604 | orchestrator | 2025-09-20 10:41:50.296615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296626 | orchestrator | Saturday 20 September 2025 10:41:47 +0000 (0:00:00.148) 0:00:31.724 **** 2025-09-20 10:41:50.296636 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296647 | orchestrator | 2025-09-20 10:41:50.296658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296669 | orchestrator | Saturday 20 September 2025 10:41:47 +0000 (0:00:00.142) 0:00:31.866 **** 2025-09-20 10:41:50.296679 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296690 | orchestrator | 2025-09-20 10:41:50.296705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296716 | orchestrator | Saturday 20 September 2025 10:41:47 +0000 (0:00:00.141) 0:00:32.008 **** 2025-09-20 10:41:50.296727 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296737 | orchestrator | 2025-09-20 10:41:50.296748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296759 | orchestrator | Saturday 20 September 2025 10:41:47 +0000 (0:00:00.198) 0:00:32.206 **** 2025-09-20 10:41:50.296769 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296780 | orchestrator | 2025-09-20 10:41:50.296790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296801 | orchestrator | Saturday 20 September 2025 10:41:48 +0000 (0:00:00.231) 0:00:32.438 **** 2025-09-20 10:41:50.296812 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296823 | orchestrator | 2025-09-20 10:41:50.296833 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-20 10:41:50.296844 | orchestrator | Saturday 20 September 2025 10:41:48 +0000 (0:00:00.569) 0:00:33.007 **** 2025-09-20 10:41:50.296855 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296865 | orchestrator | 2025-09-20 10:41:50.296876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296891 | orchestrator | Saturday 20 September 2025 10:41:48 +0000 (0:00:00.167) 0:00:33.175 **** 2025-09-20 10:41:50.296909 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.296927 | orchestrator | 2025-09-20 10:41:50.296956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.296974 | orchestrator | Saturday 20 September 2025 10:41:48 +0000 (0:00:00.159) 0:00:33.335 **** 2025-09-20 10:41:50.296990 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-20 10:41:50.297008 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-20 10:41:50.297025 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-20 10:41:50.297043 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-20 10:41:50.297061 | orchestrator | 2025-09-20 10:41:50.297079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.297099 | orchestrator | Saturday 20 September 2025 10:41:49 +0000 (0:00:00.582) 0:00:33.918 **** 2025-09-20 10:41:50.297117 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.297133 | orchestrator | 2025-09-20 10:41:50.297145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.297165 | orchestrator | Saturday 20 September 2025 10:41:49 +0000 (0:00:00.181) 0:00:34.100 **** 2025-09-20 10:41:50.297176 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.297187 | orchestrator | 2025-09-20 10:41:50.297197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.297208 | orchestrator | Saturday 20 September 2025 10:41:49 +0000 (0:00:00.212) 0:00:34.313 **** 2025-09-20 10:41:50.297219 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.297230 | orchestrator | 2025-09-20 10:41:50.297241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:41:50.297251 | orchestrator | Saturday 20 September 2025 10:41:50 +0000 (0:00:00.187) 0:00:34.500 **** 2025-09-20 10:41:50.297262 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:50.297273 | orchestrator | 2025-09-20 10:41:50.297287 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-20 10:41:50.297317 | orchestrator | Saturday 20 September 2025 10:41:50 +0000 (0:00:00.180) 0:00:34.681 **** 2025-09-20 10:41:53.991757 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-20 10:41:53.991868 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-20 10:41:53.991883 | orchestrator | 2025-09-20 10:41:53.991896 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-20 10:41:53.991907 | orchestrator | Saturday 20 September 2025 10:41:50 +0000 (0:00:00.156) 0:00:34.838 **** 2025-09-20 10:41:53.991919 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.991930 | orchestrator | 2025-09-20 10:41:53.991942 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-20 10:41:53.991953 | orchestrator | Saturday 20 September 2025 10:41:50 +0000 (0:00:00.107) 0:00:34.945 **** 2025-09-20 10:41:53.991963 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.991974 | orchestrator | 2025-09-20 10:41:53.991985 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-20 10:41:53.991995 | orchestrator | Saturday 20 September 2025 10:41:50 +0000 (0:00:00.148) 0:00:35.093 **** 2025-09-20 10:41:53.992006 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992017 | orchestrator | 2025-09-20 10:41:53.992027 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-20 10:41:53.992038 | orchestrator | Saturday 20 September 2025 10:41:50 +0000 (0:00:00.128) 0:00:35.222 **** 2025-09-20 10:41:53.992049 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:41:53.992061 | orchestrator | 2025-09-20 10:41:53.992072 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-20 10:41:53.992083 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.235) 0:00:35.457 **** 2025-09-20 10:41:53.992095 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43c75cb2-27fe-5978-b049-f1a35c211e19'}}) 2025-09-20 10:41:53.992106 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}}) 2025-09-20 10:41:53.992117 | orchestrator | 2025-09-20 10:41:53.992128 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-20 10:41:53.992138 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.159) 0:00:35.617 **** 2025-09-20 10:41:53.992150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43c75cb2-27fe-5978-b049-f1a35c211e19'}})  2025-09-20 10:41:53.992162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}})  2025-09-20 10:41:53.992172 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992183 | orchestrator | 2025-09-20 10:41:53.992194 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-20 10:41:53.992205 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.132) 0:00:35.749 **** 2025-09-20 10:41:53.992216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43c75cb2-27fe-5978-b049-f1a35c211e19'}})  2025-09-20 10:41:53.992251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}})  2025-09-20 10:41:53.992263 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992274 | orchestrator | 2025-09-20 10:41:53.992285 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-20 10:41:53.992298 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.112) 0:00:35.861 **** 2025-09-20 10:41:53.992310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43c75cb2-27fe-5978-b049-f1a35c211e19'}})  2025-09-20 10:41:53.992340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}})  2025-09-20 
10:41:53.992354 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992366 | orchestrator | 2025-09-20 10:41:53.992405 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-20 10:41:53.992417 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.138) 0:00:36.000 **** 2025-09-20 10:41:53.992429 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:41:53.992441 | orchestrator | 2025-09-20 10:41:53.992453 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-20 10:41:53.992465 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.119) 0:00:36.119 **** 2025-09-20 10:41:53.992476 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:41:53.992488 | orchestrator | 2025-09-20 10:41:53.992501 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-20 10:41:53.992513 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.127) 0:00:36.247 **** 2025-09-20 10:41:53.992525 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992537 | orchestrator | 2025-09-20 10:41:53.992549 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-20 10:41:53.992562 | orchestrator | Saturday 20 September 2025 10:41:51 +0000 (0:00:00.126) 0:00:36.373 **** 2025-09-20 10:41:53.992574 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992586 | orchestrator | 2025-09-20 10:41:53.992598 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-20 10:41:53.992610 | orchestrator | Saturday 20 September 2025 10:41:52 +0000 (0:00:00.128) 0:00:36.502 **** 2025-09-20 10:41:53.992621 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992633 | orchestrator | 2025-09-20 10:41:53.992645 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-20 10:41:53.992658 | orchestrator | Saturday 20 September 2025 10:41:52 +0000 (0:00:00.135) 0:00:36.637 **** 2025-09-20 10:41:53.992670 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 10:41:53.992681 | orchestrator |  "ceph_osd_devices": { 2025-09-20 10:41:53.992692 | orchestrator |  "sdb": { 2025-09-20 10:41:53.992703 | orchestrator |  "osd_lvm_uuid": "43c75cb2-27fe-5978-b049-f1a35c211e19" 2025-09-20 10:41:53.992732 | orchestrator |  }, 2025-09-20 10:41:53.992744 | orchestrator |  "sdc": { 2025-09-20 10:41:53.992755 | orchestrator |  "osd_lvm_uuid": "f41c3a47-393d-5abf-86b9-e0c2e1b7064d" 2025-09-20 10:41:53.992767 | orchestrator |  } 2025-09-20 10:41:53.992778 | orchestrator |  } 2025-09-20 10:41:53.992789 | orchestrator | } 2025-09-20 10:41:53.992801 | orchestrator | 2025-09-20 10:41:53.992812 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-20 10:41:53.992823 | orchestrator | Saturday 20 September 2025 10:41:52 +0000 (0:00:00.135) 0:00:36.773 **** 2025-09-20 10:41:53.992834 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992844 | orchestrator | 2025-09-20 10:41:53.992855 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-20 10:41:53.992866 | orchestrator | Saturday 20 September 2025 10:41:52 +0000 (0:00:00.096) 0:00:36.869 **** 2025-09-20 10:41:53.992877 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992887 | orchestrator | 2025-09-20 10:41:53.992898 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-20 10:41:53.992921 | orchestrator | Saturday 20 September 2025 10:41:52 +0000 (0:00:00.231) 0:00:37.101 **** 2025-09-20 10:41:53.992932 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:41:53.992943 | orchestrator | 2025-09-20 10:41:53.992953 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-20 10:41:53.992964 | orchestrator | Saturday 20 September 2025 10:41:52 +0000 (0:00:00.135) 0:00:37.236 **** 2025-09-20 10:41:53.992975 | orchestrator | changed: [testbed-node-5] => { 2025-09-20 10:41:53.992986 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-20 10:41:53.992997 | orchestrator |  "ceph_osd_devices": { 2025-09-20 10:41:53.993008 | orchestrator |  "sdb": { 2025-09-20 10:41:53.993018 | orchestrator |  "osd_lvm_uuid": "43c75cb2-27fe-5978-b049-f1a35c211e19" 2025-09-20 10:41:53.993029 | orchestrator |  }, 2025-09-20 10:41:53.993040 | orchestrator |  "sdc": { 2025-09-20 10:41:53.993051 | orchestrator |  "osd_lvm_uuid": "f41c3a47-393d-5abf-86b9-e0c2e1b7064d" 2025-09-20 10:41:53.993062 | orchestrator |  } 2025-09-20 10:41:53.993072 | orchestrator |  }, 2025-09-20 10:41:53.993083 | orchestrator |  "lvm_volumes": [ 2025-09-20 10:41:53.993094 | orchestrator |  { 2025-09-20 10:41:53.993105 | orchestrator |  "data": "osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19", 2025-09-20 10:41:53.993116 | orchestrator |  "data_vg": "ceph-43c75cb2-27fe-5978-b049-f1a35c211e19" 2025-09-20 10:41:53.993126 | orchestrator |  }, 2025-09-20 10:41:53.993137 | orchestrator |  { 2025-09-20 10:41:53.993148 | orchestrator |  "data": "osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d", 2025-09-20 10:41:53.993159 | orchestrator |  "data_vg": "ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d" 2025-09-20 10:41:53.993170 | orchestrator |  } 2025-09-20 10:41:53.993181 | orchestrator |  ] 2025-09-20 10:41:53.993192 | orchestrator |  } 2025-09-20 10:41:53.993207 | orchestrator | } 2025-09-20 10:41:53.993218 | orchestrator | 2025-09-20 10:41:53.993229 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-20 10:41:53.993240 | orchestrator | Saturday 20 September 2025 10:41:53 +0000 (0:00:00.169) 0:00:37.406 **** 2025-09-20 10:41:53.993251 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-20 10:41:53.993262 | orchestrator | 2025-09-20 10:41:53.993273 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:41:53.993284 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 10:41:53.993297 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 10:41:53.993308 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 10:41:53.993319 | orchestrator | 2025-09-20 10:41:53.993330 | orchestrator | 2025-09-20 10:41:53.993341 | orchestrator | 2025-09-20 10:41:53.993352 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:41:53.993363 | orchestrator | Saturday 20 September 2025 10:41:53 +0000 (0:00:00.950) 0:00:38.356 **** 2025-09-20 10:41:53.993389 | orchestrator | =============================================================================== 2025-09-20 10:41:53.993400 | orchestrator | Write configuration file 
------------------------------------------------ 3.78s 2025-09-20 10:41:53.993411 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2025-09-20 10:41:53.993422 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2025-09-20 10:41:53.993433 | orchestrator | Get initial list of available block devices ----------------------------- 1.02s 2025-09-20 10:41:53.993443 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s 2025-09-20 10:41:53.993461 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-09-20 10:41:53.993472 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.74s 2025-09-20 10:41:53.993483 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-09-20 10:41:53.993494 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-09-20 10:41:53.993505 | orchestrator | Set WAL devices config data --------------------------------------------- 0.65s 2025-09-20 10:41:53.993516 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-09-20 10:41:53.993526 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2025-09-20 10:41:53.993537 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2025-09-20 10:41:53.993548 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.56s 2025-09-20 10:41:53.993566 | orchestrator | Print configuration data ------------------------------------------------ 0.54s 2025-09-20 10:41:54.332658 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2025-09-20 10:41:54.332764 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2025-09-20 10:41:54.332779 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.51s 2025-09-20 10:41:54.332790 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s 2025-09-20 10:41:54.332801 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.50s 2025-09-20 10:42:17.308075 | orchestrator | 2025-09-20 10:42:17 | INFO  | Task 1edbb33b-5b86-42fe-91ea-6bf47d46b0e1 (sync inventory) is running in background. Output coming soon. 
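Each of the three "Ceph configure LVM" plays above ends with the same shape of output: both OSD data disks (sdb and sdc) receive a name-based osd_lvm_uuid (the identifiers in the log are version-5 UUIDs), the DB/WAL variants are skipped, and the compiled lvm_volumes list holds one block-only entry per disk with data "osd-block-<uuid>" and data_vg "ceph-<uuid>". The sketch below reproduces only that mapping in Python; the uuid5 namespace and name used to fill in missing UUIDs are assumptions for illustration, and only the output shape is taken from the log.

```python
# Minimal sketch (not the OSISM implementation): derive the block-only
# lvm_volumes list that the "Print configuration data" task displays.
import uuid


def fill_osd_uuids(hostname, ceph_osd_devices):
    """Assign a UUID to every OSD device that has none yet ('value': None).

    The log shows version-5 (name-based) UUIDs; the namespace/name scheme
    used here is an assumption for illustration only.
    """
    for device, value in ceph_osd_devices.items():
        if not value or not value.get("osd_lvm_uuid"):
            ceph_osd_devices[device] = {
                "osd_lvm_uuid": str(
                    uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}")
                )
            }
    return ceph_osd_devices


def compile_lvm_volumes(ceph_osd_devices):
    """Block-only layout, matching the log: the DB/WAL tasks were skipped."""
    return [
        {
            "data": f"osd-block-{value['osd_lvm_uuid']}",
            "data_vg": f"ceph-{value['osd_lvm_uuid']}",
        }
        for value in ceph_osd_devices.values()
    ]


if __name__ == "__main__":
    # Mirrors "Set UUIDs for OSD VGs/LVs": both items arrive with value None.
    devices = fill_osd_uuids("testbed-node-4", {"sdb": None, "sdc": None})
    print(compile_lvm_volumes(devices))
```

The "Write configuration file" handler then delegates to testbed-manager (192.168.16.5) to persist the structure for each host; together with the changed "Print configuration data" task, that accounts for the changed=2 per node in the PLAY RECAP.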
2025-09-20 10:42:40.731675 | orchestrator | 2025-09-20 10:42:18 | INFO  | Starting group_vars file reorganization 2025-09-20 10:42:40.731796 | orchestrator | 2025-09-20 10:42:18 | INFO  | Moved 0 file(s) to their respective directories 2025-09-20 10:42:40.731813 | orchestrator | 2025-09-20 10:42:18 | INFO  | Group_vars file reorganization completed 2025-09-20 10:42:40.731825 | orchestrator | 2025-09-20 10:42:21 | INFO  | Starting variable preparation from inventory 2025-09-20 10:42:40.731836 | orchestrator | 2025-09-20 10:42:23 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-20 10:42:40.731848 | orchestrator | 2025-09-20 10:42:23 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-20 10:42:40.731859 | orchestrator | 2025-09-20 10:42:23 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-20 10:42:40.731891 | orchestrator | 2025-09-20 10:42:23 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-20 10:42:40.731904 | orchestrator | 2025-09-20 10:42:23 | INFO  | Variable preparation completed 2025-09-20 10:42:40.731915 | orchestrator | 2025-09-20 10:42:24 | INFO  | Starting inventory overwrite handling 2025-09-20 10:42:40.731926 | orchestrator | 2025-09-20 10:42:24 | INFO  | Handling group overwrites in 99-overwrite 2025-09-20 10:42:40.731946 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removing group frr:children from 60-generic 2025-09-20 10:42:40.731957 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removing group storage:children from 50-kolla 2025-09-20 10:42:40.731968 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removing group netbird:children from 50-infrastructure 2025-09-20 10:42:40.731979 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-20 10:42:40.731990 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-20 10:42:40.732001 | orchestrator | 2025-09-20 10:42:24 | INFO  | Handling group overwrites in 20-roles 2025-09-20 10:42:40.732012 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removing group k3s_node from 50-infrastructure 2025-09-20 10:42:40.732047 | orchestrator | 2025-09-20 10:42:24 | INFO  | Removed 6 group(s) in total 2025-09-20 10:42:40.732059 | orchestrator | 2025-09-20 10:42:24 | INFO  | Inventory overwrite handling completed 2025-09-20 10:42:40.732070 | orchestrator | 2025-09-20 10:42:25 | INFO  | Starting merge of inventory files 2025-09-20 10:42:40.732081 | orchestrator | 2025-09-20 10:42:25 | INFO  | Inventory files merged successfully 2025-09-20 10:42:40.732092 | orchestrator | 2025-09-20 10:42:29 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-20 10:42:40.732103 | orchestrator | 2025-09-20 10:42:39 | INFO  | Successfully wrote ClusterShell configuration 2025-09-20 10:42:40.732114 | orchestrator | [master 07c56b2] 2025-09-20-10-42 2025-09-20 10:42:40.732126 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-20 10:42:43.027467 | orchestrator | 2025-09-20 10:42:43 | INFO  | Task e6892305-4e04-4865-8136-ebff54d628fd (ceph-create-lvm-devices) was prepared for execution. 2025-09-20 10:42:43.027563 | orchestrator | 2025-09-20 10:42:43 | INFO  | It takes a moment until task e6892305-4e04-4865-8136-ebff54d628fd (ceph-create-lvm-devices) has been started and output is visible here. 
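The sync-inventory task logged above runs several steps on the manager: reorganize group_vars files, derive variables from the inventory (the 050-*.yml files), resolve group overwrites between the layered inventory sources, merge the files, and regenerate the ClusterShell configuration before committing the result. The overwrite step is the least self-explanatory: a group redefined in a higher-priority layer (99-overwrite, 20-roles) is removed from the lower-priority files (50-ceph, 50-infrastructure, 50-kolla, 60-generic). The Python sketch below illustrates that idea only; the file layout, precedence order, and matching rules are assumptions, not the actual osism implementation.

```python
# Illustrative sketch only -- not the actual sync-inventory code.
# When a group such as frr:children appears in a high-priority inventory
# file, the same [section] is dropped from the lower-priority files.
import re
from pathlib import Path

SECTION = re.compile(r"^\[(?P<name>[^\]]+)\]\s*$")


def group_names(path: Path) -> set[str]:
    """Collect the [group] headers of an INI-style Ansible inventory file."""
    names = set()
    for line in path.read_text().splitlines():
        match = SECTION.match(line)
        if match:
            names.add(match.group("name"))
    return names


def remove_groups(path: Path, groups: set[str]) -> int:
    """Strip whole [group] sections from a lower-priority inventory file."""
    kept, removed, skip = [], 0, False
    for line in path.read_text().splitlines():
        match = SECTION.match(line)
        if match:
            skip = match.group("name") in groups
            if skip:
                removed += 1
                print(f"Removing group {match.group('name')} from {path.name}")
                continue
        if not skip:
            kept.append(line)
    path.write_text("\n".join(kept) + "\n")
    return removed


# Usage (paths and layer names are assumptions for the example):
# overwrites = group_names(Path("inventory/99-overwrite"))
# total = sum(remove_groups(Path(f"inventory/{name}"), overwrites)
#             for name in ("50-ceph", "50-infrastructure", "50-kolla", "60-generic"))
```

Applied once per overwrite layer, a routine like this would report the six removals the log lists before the merged inventory is committed (master 07c56b2).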
2025-09-20 10:42:54.585190 | orchestrator | 2025-09-20 10:42:54.585329 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-20 10:42:54.585347 | orchestrator | 2025-09-20 10:42:54.585360 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:42:54.585373 | orchestrator | Saturday 20 September 2025 10:42:46 +0000 (0:00:00.276) 0:00:00.276 **** 2025-09-20 10:42:54.585407 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:42:54.585419 | orchestrator | 2025-09-20 10:42:54.585441 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 10:42:54.585453 | orchestrator | Saturday 20 September 2025 10:42:47 +0000 (0:00:00.259) 0:00:00.536 **** 2025-09-20 10:42:54.585465 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:42:54.585478 | orchestrator | 2025-09-20 10:42:54.585489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585501 | orchestrator | Saturday 20 September 2025 10:42:47 +0000 (0:00:00.229) 0:00:00.766 **** 2025-09-20 10:42:54.585512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-20 10:42:54.585525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-20 10:42:54.585536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-20 10:42:54.585547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-20 10:42:54.585558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-20 10:42:54.585569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-20 10:42:54.585580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-20 10:42:54.585590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-20 10:42:54.585602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-20 10:42:54.585613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-20 10:42:54.585624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-20 10:42:54.585635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-20 10:42:54.585646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-20 10:42:54.585657 | orchestrator | 2025-09-20 10:42:54.585668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585702 | orchestrator | Saturday 20 September 2025 10:42:47 +0000 (0:00:00.405) 0:00:01.172 **** 2025-09-20 10:42:54.585714 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.585727 | orchestrator | 2025-09-20 10:42:54.585739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585751 | orchestrator | Saturday 20 September 2025 10:42:48 +0000 (0:00:00.370) 0:00:01.542 **** 2025-09-20 10:42:54.585763 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
10:42:54.585775 | orchestrator | 2025-09-20 10:42:54.585788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585799 | orchestrator | Saturday 20 September 2025 10:42:48 +0000 (0:00:00.169) 0:00:01.712 **** 2025-09-20 10:42:54.585812 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.585824 | orchestrator | 2025-09-20 10:42:54.585836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585849 | orchestrator | Saturday 20 September 2025 10:42:48 +0000 (0:00:00.173) 0:00:01.885 **** 2025-09-20 10:42:54.585861 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.585873 | orchestrator | 2025-09-20 10:42:54.585885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585898 | orchestrator | Saturday 20 September 2025 10:42:48 +0000 (0:00:00.179) 0:00:02.064 **** 2025-09-20 10:42:54.585910 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.585922 | orchestrator | 2025-09-20 10:42:54.585935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585947 | orchestrator | Saturday 20 September 2025 10:42:48 +0000 (0:00:00.188) 0:00:02.253 **** 2025-09-20 10:42:54.585958 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.585969 | orchestrator | 2025-09-20 10:42:54.585980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.585991 | orchestrator | Saturday 20 September 2025 10:42:49 +0000 (0:00:00.192) 0:00:02.445 **** 2025-09-20 10:42:54.586002 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586013 | orchestrator | 2025-09-20 10:42:54.586082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.586094 | orchestrator | Saturday 20 September 2025 10:42:49 +0000 (0:00:00.180) 0:00:02.626 **** 2025-09-20 10:42:54.586105 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586116 | orchestrator | 2025-09-20 10:42:54.586127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.586138 | orchestrator | Saturday 20 September 2025 10:42:49 +0000 (0:00:00.198) 0:00:02.824 **** 2025-09-20 10:42:54.586149 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d) 2025-09-20 10:42:54.586161 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d) 2025-09-20 10:42:54.586172 | orchestrator | 2025-09-20 10:42:54.586183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.586194 | orchestrator | Saturday 20 September 2025 10:42:49 +0000 (0:00:00.368) 0:00:03.193 **** 2025-09-20 10:42:54.586225 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf) 2025-09-20 10:42:54.586237 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf) 2025-09-20 10:42:54.586248 | orchestrator | 2025-09-20 10:42:54.586259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.586270 | orchestrator | Saturday 20 September 2025 10:42:50 +0000 (0:00:00.365) 0:00:03.559 **** 2025-09-20 
10:42:54.586280 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949) 2025-09-20 10:42:54.586292 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949) 2025-09-20 10:42:54.586303 | orchestrator | 2025-09-20 10:42:54.586314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.586335 | orchestrator | Saturday 20 September 2025 10:42:50 +0000 (0:00:00.722) 0:00:04.281 **** 2025-09-20 10:42:54.586345 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f) 2025-09-20 10:42:54.586356 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f) 2025-09-20 10:42:54.586367 | orchestrator | 2025-09-20 10:42:54.586407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:42:54.586419 | orchestrator | Saturday 20 September 2025 10:42:52 +0000 (0:00:01.149) 0:00:05.430 **** 2025-09-20 10:42:54.586430 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 10:42:54.586441 | orchestrator | 2025-09-20 10:42:54.586452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586463 | orchestrator | Saturday 20 September 2025 10:42:52 +0000 (0:00:00.355) 0:00:05.786 **** 2025-09-20 10:42:54.586474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-20 10:42:54.586484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-20 10:42:54.586495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-20 10:42:54.586505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-20 10:42:54.586535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-20 10:42:54.586546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-20 10:42:54.586557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-20 10:42:54.586568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-20 10:42:54.586579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-20 10:42:54.586590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-20 10:42:54.586601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-20 10:42:54.586611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-20 10:42:54.586627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-20 10:42:54.586638 | orchestrator | 2025-09-20 10:42:54.586649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586660 | orchestrator | Saturday 20 September 2025 10:42:52 +0000 (0:00:00.433) 0:00:06.219 **** 2025-09-20 10:42:54.586671 | orchestrator | skipping: [testbed-node-3] 
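The play includes /ansible/tasks/_add-device-links.yml once per device to pick up stable /dev/disk/by-id names for the disks it later turns into OSDs. The task file itself is not shown in the log; the following is only a rough sketch of how such a lookup can be expressed in Ansible (the module names exist, the wiring and the fact name `ceph_available_devices` are assumptions).

```yaml
# Assumed sketch - not the real /ansible/tasks/_add-device-links.yml.
- name: Resolve /dev/disk/by-id links pointing at /dev/{{ item }}
  ansible.builtin.shell: >
    find /dev/disk/by-id -maxdepth 1 -type l
    -lname '*/{{ item }}' -printf '%f\n'
  register: _device_links
  changed_when: false

- name: Add the resolved link names to the list of available block devices
  ansible.builtin.set_fact:
    ceph_available_devices: >-
      {{ ceph_available_devices | default([]) + _device_links.stdout_lines }}
```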
2025-09-20 10:42:54.586681 | orchestrator | 2025-09-20 10:42:54.586692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586703 | orchestrator | Saturday 20 September 2025 10:42:53 +0000 (0:00:00.196) 0:00:06.415 **** 2025-09-20 10:42:54.586714 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586724 | orchestrator | 2025-09-20 10:42:54.586735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586746 | orchestrator | Saturday 20 September 2025 10:42:53 +0000 (0:00:00.225) 0:00:06.641 **** 2025-09-20 10:42:54.586756 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586767 | orchestrator | 2025-09-20 10:42:54.586778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586789 | orchestrator | Saturday 20 September 2025 10:42:53 +0000 (0:00:00.215) 0:00:06.857 **** 2025-09-20 10:42:54.586800 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586810 | orchestrator | 2025-09-20 10:42:54.586821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586840 | orchestrator | Saturday 20 September 2025 10:42:53 +0000 (0:00:00.243) 0:00:07.101 **** 2025-09-20 10:42:54.586851 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586862 | orchestrator | 2025-09-20 10:42:54.586873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586884 | orchestrator | Saturday 20 September 2025 10:42:53 +0000 (0:00:00.204) 0:00:07.305 **** 2025-09-20 10:42:54.586894 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586905 | orchestrator | 2025-09-20 10:42:54.586916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586926 | orchestrator | Saturday 20 September 2025 10:42:54 +0000 (0:00:00.254) 0:00:07.560 **** 2025-09-20 10:42:54.586937 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:42:54.586948 | orchestrator | 2025-09-20 10:42:54.586959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:42:54.586970 | orchestrator | Saturday 20 September 2025 10:42:54 +0000 (0:00:00.204) 0:00:07.765 **** 2025-09-20 10:42:54.586987 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.029502 | orchestrator | 2025-09-20 10:43:02.029617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:02.029634 | orchestrator | Saturday 20 September 2025 10:42:54 +0000 (0:00:00.174) 0:00:07.939 **** 2025-09-20 10:43:02.029646 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-20 10:43:02.029659 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-20 10:43:02.029671 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-20 10:43:02.029682 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-20 10:43:02.029693 | orchestrator | 2025-09-20 10:43:02.029705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:02.029716 | orchestrator | Saturday 20 September 2025 10:42:55 +0000 (0:00:00.927) 0:00:08.867 **** 2025-09-20 10:43:02.029727 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.029738 | orchestrator | 2025-09-20 10:43:02.029749 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:02.029760 | orchestrator | Saturday 20 September 2025 10:42:55 +0000 (0:00:00.198) 0:00:09.066 **** 2025-09-20 10:43:02.029771 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.029781 | orchestrator | 2025-09-20 10:43:02.029792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:02.029803 | orchestrator | Saturday 20 September 2025 10:42:55 +0000 (0:00:00.195) 0:00:09.261 **** 2025-09-20 10:43:02.029814 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.029825 | orchestrator | 2025-09-20 10:43:02.029837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:02.029848 | orchestrator | Saturday 20 September 2025 10:42:56 +0000 (0:00:00.172) 0:00:09.433 **** 2025-09-20 10:43:02.029859 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.029869 | orchestrator | 2025-09-20 10:43:02.029881 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-20 10:43:02.029892 | orchestrator | Saturday 20 September 2025 10:42:56 +0000 (0:00:00.173) 0:00:09.607 **** 2025-09-20 10:43:02.029903 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.029913 | orchestrator | 2025-09-20 10:43:02.029924 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-20 10:43:02.029935 | orchestrator | Saturday 20 September 2025 10:42:56 +0000 (0:00:00.153) 0:00:09.761 **** 2025-09-20 10:43:02.029947 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8bfbaad6-401f-511d-91f2-acbf67028504'}}) 2025-09-20 10:43:02.029958 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44b8c0b1-de10-587f-a252-374190a68e04'}}) 2025-09-20 10:43:02.029969 | orchestrator | 2025-09-20 10:43:02.029980 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-20 10:43:02.029991 | orchestrator | Saturday 20 September 2025 10:42:56 +0000 (0:00:00.175) 0:00:09.937 **** 2025-09-20 10:43:02.030003 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'}) 2025-09-20 10:43:02.030086 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'}) 2025-09-20 10:43:02.030100 | orchestrator | 2025-09-20 10:43:02.030111 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-20 10:43:02.030123 | orchestrator | Saturday 20 September 2025 10:42:58 +0000 (0:00:01.938) 0:00:11.875 **** 2025-09-20 10:43:02.030134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.030147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.030158 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030168 | orchestrator | 2025-09-20 10:43:02.030179 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-20 
10:43:02.030190 | orchestrator | Saturday 20 September 2025 10:42:58 +0000 (0:00:00.173) 0:00:12.049 **** 2025-09-20 10:43:02.030201 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'}) 2025-09-20 10:43:02.030212 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'}) 2025-09-20 10:43:02.030223 | orchestrator | 2025-09-20 10:43:02.030234 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-20 10:43:02.030245 | orchestrator | Saturday 20 September 2025 10:43:00 +0000 (0:00:01.441) 0:00:13.491 **** 2025-09-20 10:43:02.030256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.030267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.030278 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030289 | orchestrator | 2025-09-20 10:43:02.030306 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-20 10:43:02.030356 | orchestrator | Saturday 20 September 2025 10:43:00 +0000 (0:00:00.144) 0:00:13.635 **** 2025-09-20 10:43:02.030375 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030420 | orchestrator | 2025-09-20 10:43:02.030437 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-20 10:43:02.030479 | orchestrator | Saturday 20 September 2025 10:43:00 +0000 (0:00:00.134) 0:00:13.770 **** 2025-09-20 10:43:02.030498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.030516 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.030534 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030552 | orchestrator | 2025-09-20 10:43:02.030570 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-20 10:43:02.030589 | orchestrator | Saturday 20 September 2025 10:43:00 +0000 (0:00:00.261) 0:00:14.032 **** 2025-09-20 10:43:02.030606 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030622 | orchestrator | 2025-09-20 10:43:02.030641 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-20 10:43:02.030660 | orchestrator | Saturday 20 September 2025 10:43:00 +0000 (0:00:00.122) 0:00:14.154 **** 2025-09-20 10:43:02.030678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.030713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.030733 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030752 | orchestrator | 2025-09-20 10:43:02.030767 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-20 10:43:02.030778 | orchestrator | Saturday 20 September 2025 10:43:00 +0000 (0:00:00.135) 0:00:14.290 **** 2025-09-20 10:43:02.030789 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030800 | orchestrator | 2025-09-20 10:43:02.030811 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-20 10:43:02.030822 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.133) 0:00:14.424 **** 2025-09-20 10:43:02.030832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.030844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.030854 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.030865 | orchestrator | 2025-09-20 10:43:02.030876 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-20 10:43:02.030887 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.142) 0:00:14.566 **** 2025-09-20 10:43:02.030898 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:02.030910 | orchestrator | 2025-09-20 10:43:02.030921 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-20 10:43:02.030932 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.122) 0:00:14.689 **** 2025-09-20 10:43:02.030968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.030980 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.030991 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.031002 | orchestrator | 2025-09-20 10:43:02.031013 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-20 10:43:02.031024 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.139) 0:00:14.829 **** 2025-09-20 10:43:02.031034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.031046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:02.031057 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.031068 | orchestrator | 2025-09-20 10:43:02.031078 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-20 10:43:02.031089 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.146) 0:00:14.975 **** 2025-09-20 10:43:02.031100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:02.031111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  
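The "Create block VGs" and "Create block LVs" tasks above report `changed` for the two data devices of testbed-node-3, while all DB/WAL variants are skipped. The actual task definitions are not part of the log; a plausible sketch using the community.general LVM modules looks like the following (module names are real, the loop wiring and the helper variable `_block_vg_pvs` are assumptions).

```yaml
# Assumed sketch of the VG/LV creation step; task names come from the log,
# the module wiring below is illustrative rather than the actual playbook.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.key }}"          # e.g. ceph-<osd_lvm_uuid>
    pvs: "{{ item.value }}"       # e.g. /dev/sdb
  loop: "{{ _block_vg_pvs | dict2items }}"   # dict of block VGs -> PVs built earlier

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"         # e.g. osd-block-<osd_lvm_uuid>
    size: 100%FREE
    shrink: false
  loop: "{{ lvm_volumes }}"
```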
2025-09-20 10:43:02.031122 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.031133 | orchestrator | 2025-09-20 10:43:02.031144 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-20 10:43:02.031155 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.147) 0:00:15.123 **** 2025-09-20 10:43:02.031166 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.031185 | orchestrator | 2025-09-20 10:43:02.031196 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-20 10:43:02.031207 | orchestrator | Saturday 20 September 2025 10:43:01 +0000 (0:00:00.122) 0:00:15.246 **** 2025-09-20 10:43:02.031218 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:02.031229 | orchestrator | 2025-09-20 10:43:02.031250 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-20 10:43:07.798075 | orchestrator | Saturday 20 September 2025 10:43:02 +0000 (0:00:00.139) 0:00:15.385 **** 2025-09-20 10:43:07.798191 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.798208 | orchestrator | 2025-09-20 10:43:07.798221 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-20 10:43:07.798233 | orchestrator | Saturday 20 September 2025 10:43:02 +0000 (0:00:00.116) 0:00:15.501 **** 2025-09-20 10:43:07.798244 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 10:43:07.798256 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-20 10:43:07.798267 | orchestrator | } 2025-09-20 10:43:07.798279 | orchestrator | 2025-09-20 10:43:07.798296 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-20 10:43:07.798314 | orchestrator | Saturday 20 September 2025 10:43:02 +0000 (0:00:00.274) 0:00:15.776 **** 2025-09-20 10:43:07.798332 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 10:43:07.798351 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-20 10:43:07.798368 | orchestrator | } 2025-09-20 10:43:07.798457 | orchestrator | 2025-09-20 10:43:07.798476 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-20 10:43:07.798488 | orchestrator | Saturday 20 September 2025 10:43:02 +0000 (0:00:00.163) 0:00:15.939 **** 2025-09-20 10:43:07.798499 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 10:43:07.798510 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-20 10:43:07.798522 | orchestrator | } 2025-09-20 10:43:07.798534 | orchestrator | 2025-09-20 10:43:07.798546 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-20 10:43:07.798559 | orchestrator | Saturday 20 September 2025 10:43:02 +0000 (0:00:00.136) 0:00:16.075 **** 2025-09-20 10:43:07.798572 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:07.798584 | orchestrator | 2025-09-20 10:43:07.798597 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-20 10:43:07.798610 | orchestrator | Saturday 20 September 2025 10:43:03 +0000 (0:00:00.638) 0:00:16.714 **** 2025-09-20 10:43:07.798622 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:07.798633 | orchestrator | 2025-09-20 10:43:07.798644 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-20 10:43:07.798655 | orchestrator | Saturday 20 September 2025 10:43:03 +0000 
(0:00:00.494) 0:00:17.208 **** 2025-09-20 10:43:07.798666 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:07.798677 | orchestrator | 2025-09-20 10:43:07.798688 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-20 10:43:07.798699 | orchestrator | Saturday 20 September 2025 10:43:04 +0000 (0:00:00.526) 0:00:17.735 **** 2025-09-20 10:43:07.798710 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:07.798721 | orchestrator | 2025-09-20 10:43:07.798732 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-20 10:43:07.798743 | orchestrator | Saturday 20 September 2025 10:43:04 +0000 (0:00:00.167) 0:00:17.902 **** 2025-09-20 10:43:07.798754 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.798765 | orchestrator | 2025-09-20 10:43:07.798776 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-20 10:43:07.798788 | orchestrator | Saturday 20 September 2025 10:43:04 +0000 (0:00:00.099) 0:00:18.001 **** 2025-09-20 10:43:07.798799 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.798809 | orchestrator | 2025-09-20 10:43:07.798820 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-20 10:43:07.798831 | orchestrator | Saturday 20 September 2025 10:43:04 +0000 (0:00:00.102) 0:00:18.104 **** 2025-09-20 10:43:07.798867 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 10:43:07.798879 | orchestrator |  "vgs_report": { 2025-09-20 10:43:07.798906 | orchestrator |  "vg": [] 2025-09-20 10:43:07.798918 | orchestrator |  } 2025-09-20 10:43:07.798929 | orchestrator | } 2025-09-20 10:43:07.798939 | orchestrator | 2025-09-20 10:43:07.798950 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-20 10:43:07.798961 | orchestrator | Saturday 20 September 2025 10:43:04 +0000 (0:00:00.111) 0:00:18.216 **** 2025-09-20 10:43:07.798972 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.798983 | orchestrator | 2025-09-20 10:43:07.798994 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-20 10:43:07.799005 | orchestrator | Saturday 20 September 2025 10:43:04 +0000 (0:00:00.121) 0:00:18.338 **** 2025-09-20 10:43:07.799016 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799027 | orchestrator | 2025-09-20 10:43:07.799038 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-20 10:43:07.799049 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.111) 0:00:18.450 **** 2025-09-20 10:43:07.799059 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799070 | orchestrator | 2025-09-20 10:43:07.799081 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-20 10:43:07.799092 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.267) 0:00:18.717 **** 2025-09-20 10:43:07.799102 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799113 | orchestrator | 2025-09-20 10:43:07.799124 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-20 10:43:07.799135 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.131) 0:00:18.848 **** 2025-09-20 10:43:07.799146 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799157 | orchestrator | 
2025-09-20 10:43:07.799168 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-20 10:43:07.799179 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.123) 0:00:18.972 **** 2025-09-20 10:43:07.799189 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799200 | orchestrator | 2025-09-20 10:43:07.799211 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-20 10:43:07.799222 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.129) 0:00:19.101 **** 2025-09-20 10:43:07.799233 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799244 | orchestrator | 2025-09-20 10:43:07.799255 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-20 10:43:07.799266 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.135) 0:00:19.237 **** 2025-09-20 10:43:07.799277 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799287 | orchestrator | 2025-09-20 10:43:07.799298 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-20 10:43:07.799330 | orchestrator | Saturday 20 September 2025 10:43:05 +0000 (0:00:00.124) 0:00:19.361 **** 2025-09-20 10:43:07.799341 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799352 | orchestrator | 2025-09-20 10:43:07.799363 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-20 10:43:07.799374 | orchestrator | Saturday 20 September 2025 10:43:06 +0000 (0:00:00.131) 0:00:19.492 **** 2025-09-20 10:43:07.799415 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799435 | orchestrator | 2025-09-20 10:43:07.799453 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-20 10:43:07.799466 | orchestrator | Saturday 20 September 2025 10:43:06 +0000 (0:00:00.123) 0:00:19.616 **** 2025-09-20 10:43:07.799477 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799487 | orchestrator | 2025-09-20 10:43:07.799498 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-20 10:43:07.799509 | orchestrator | Saturday 20 September 2025 10:43:06 +0000 (0:00:00.118) 0:00:19.735 **** 2025-09-20 10:43:07.799520 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799531 | orchestrator | 2025-09-20 10:43:07.799554 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-20 10:43:07.799565 | orchestrator | Saturday 20 September 2025 10:43:06 +0000 (0:00:00.133) 0:00:19.868 **** 2025-09-20 10:43:07.799576 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799587 | orchestrator | 2025-09-20 10:43:07.799598 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-20 10:43:07.799608 | orchestrator | Saturday 20 September 2025 10:43:06 +0000 (0:00:00.126) 0:00:19.995 **** 2025-09-20 10:43:07.799619 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799630 | orchestrator | 2025-09-20 10:43:07.799641 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-20 10:43:07.799651 | orchestrator | Saturday 20 September 2025 10:43:06 +0000 (0:00:00.127) 0:00:20.122 **** 2025-09-20 10:43:07.799663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:07.799676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:07.799686 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799697 | orchestrator | 2025-09-20 10:43:07.799708 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-20 10:43:07.799719 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.295) 0:00:20.418 **** 2025-09-20 10:43:07.799729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:07.799740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:07.799751 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799762 | orchestrator | 2025-09-20 10:43:07.799772 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-20 10:43:07.799783 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.161) 0:00:20.579 **** 2025-09-20 10:43:07.799794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:07.799804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:07.799815 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799826 | orchestrator | 2025-09-20 10:43:07.799837 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-20 10:43:07.799847 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.125) 0:00:20.705 **** 2025-09-20 10:43:07.799858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:07.799869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:07.799880 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799890 | orchestrator | 2025-09-20 10:43:07.799901 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-20 10:43:07.799911 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.142) 0:00:20.848 **** 2025-09-20 10:43:07.799922 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:07.799933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:07.799943 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:07.799962 | orchestrator | 2025-09-20 10:43:07.799973 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-20 10:43:07.799983 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.140) 0:00:20.988 **** 2025-09-20 10:43:07.800002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:07.800020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:12.993147 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:12.993250 | orchestrator | 2025-09-20 10:43:12.993266 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-20 10:43:12.993278 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.164) 0:00:21.153 **** 2025-09-20 10:43:12.993289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:12.993300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:12.993311 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:12.993320 | orchestrator | 2025-09-20 10:43:12.993330 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-20 10:43:12.993340 | orchestrator | Saturday 20 September 2025 10:43:07 +0000 (0:00:00.134) 0:00:21.287 **** 2025-09-20 10:43:12.993350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:12.993360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:12.993370 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:12.993430 | orchestrator | 2025-09-20 10:43:12.993441 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-20 10:43:12.993451 | orchestrator | Saturday 20 September 2025 10:43:08 +0000 (0:00:00.144) 0:00:21.432 **** 2025-09-20 10:43:12.993461 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:12.993472 | orchestrator | 2025-09-20 10:43:12.993482 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-20 10:43:12.993492 | orchestrator | Saturday 20 September 2025 10:43:08 +0000 (0:00:00.499) 0:00:21.931 **** 2025-09-20 10:43:12.993501 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:12.993511 | orchestrator | 2025-09-20 10:43:12.993520 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-20 10:43:12.993530 | orchestrator | Saturday 20 September 2025 10:43:09 +0000 (0:00:00.482) 0:00:22.414 **** 2025-09-20 10:43:12.993540 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:43:12.993549 | orchestrator | 2025-09-20 10:43:12.993559 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-20 10:43:12.993569 | orchestrator | Saturday 20 September 2025 10:43:09 +0000 (0:00:00.126) 0:00:22.540 **** 2025-09-20 10:43:12.993580 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'vg_name': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'}) 2025-09-20 10:43:12.993591 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'vg_name': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'}) 2025-09-20 10:43:12.993600 | orchestrator | 2025-09-20 10:43:12.993629 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-20 10:43:12.993639 | orchestrator | Saturday 20 September 2025 10:43:09 +0000 (0:00:00.166) 0:00:22.707 **** 2025-09-20 10:43:12.993649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:12.993682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:12.993694 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:12.993705 | orchestrator | 2025-09-20 10:43:12.993716 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-20 10:43:12.993727 | orchestrator | Saturday 20 September 2025 10:43:09 +0000 (0:00:00.289) 0:00:22.996 **** 2025-09-20 10:43:12.993738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:12.993749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:12.993760 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:12.993771 | orchestrator | 2025-09-20 10:43:12.993782 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-20 10:43:12.993793 | orchestrator | Saturday 20 September 2025 10:43:09 +0000 (0:00:00.130) 0:00:23.127 **** 2025-09-20 10:43:12.993804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'})  2025-09-20 10:43:12.993815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'})  2025-09-20 10:43:12.993826 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:43:12.993837 | orchestrator | 2025-09-20 10:43:12.993847 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-20 10:43:12.993858 | orchestrator | Saturday 20 September 2025 10:43:09 +0000 (0:00:00.141) 0:00:23.268 **** 2025-09-20 10:43:12.993869 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 10:43:12.993880 | orchestrator |  "lvm_report": { 2025-09-20 10:43:12.993892 | orchestrator |  "lv": [ 2025-09-20 10:43:12.993903 | orchestrator |  { 2025-09-20 10:43:12.993931 | orchestrator |  "lv_name": "osd-block-44b8c0b1-de10-587f-a252-374190a68e04", 2025-09-20 10:43:12.993943 | orchestrator |  "vg_name": "ceph-44b8c0b1-de10-587f-a252-374190a68e04" 2025-09-20 10:43:12.993954 | orchestrator |  }, 2025-09-20 10:43:12.993966 | orchestrator |  { 2025-09-20 10:43:12.993977 | orchestrator |  "lv_name": "osd-block-8bfbaad6-401f-511d-91f2-acbf67028504", 2025-09-20 10:43:12.993988 | orchestrator |  "vg_name": 
"ceph-8bfbaad6-401f-511d-91f2-acbf67028504" 2025-09-20 10:43:12.993998 | orchestrator |  } 2025-09-20 10:43:12.994009 | orchestrator |  ], 2025-09-20 10:43:12.994072 | orchestrator |  "pv": [ 2025-09-20 10:43:12.994082 | orchestrator |  { 2025-09-20 10:43:12.994092 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-20 10:43:12.994102 | orchestrator |  "vg_name": "ceph-8bfbaad6-401f-511d-91f2-acbf67028504" 2025-09-20 10:43:12.994112 | orchestrator |  }, 2025-09-20 10:43:12.994122 | orchestrator |  { 2025-09-20 10:43:12.994131 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-20 10:43:12.994141 | orchestrator |  "vg_name": "ceph-44b8c0b1-de10-587f-a252-374190a68e04" 2025-09-20 10:43:12.994151 | orchestrator |  } 2025-09-20 10:43:12.994161 | orchestrator |  ] 2025-09-20 10:43:12.994170 | orchestrator |  } 2025-09-20 10:43:12.994180 | orchestrator | } 2025-09-20 10:43:12.994190 | orchestrator | 2025-09-20 10:43:12.994200 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-20 10:43:12.994210 | orchestrator | 2025-09-20 10:43:12.994219 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:43:12.994229 | orchestrator | Saturday 20 September 2025 10:43:10 +0000 (0:00:00.264) 0:00:23.533 **** 2025-09-20 10:43:12.994239 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-20 10:43:12.994255 | orchestrator | 2025-09-20 10:43:12.994266 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 10:43:12.994275 | orchestrator | Saturday 20 September 2025 10:43:10 +0000 (0:00:00.228) 0:00:23.761 **** 2025-09-20 10:43:12.994285 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:12.994294 | orchestrator | 2025-09-20 10:43:12.994304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994314 | orchestrator | Saturday 20 September 2025 10:43:10 +0000 (0:00:00.232) 0:00:23.993 **** 2025-09-20 10:43:12.994323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-20 10:43:12.994333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-20 10:43:12.994342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-20 10:43:12.994351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-20 10:43:12.994361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-20 10:43:12.994371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-20 10:43:12.994396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-20 10:43:12.994411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-20 10:43:12.994421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-20 10:43:12.994431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-20 10:43:12.994440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-20 10:43:12.994450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-20 10:43:12.994459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-20 10:43:12.994469 | orchestrator | 2025-09-20 10:43:12.994478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994488 | orchestrator | Saturday 20 September 2025 10:43:11 +0000 (0:00:00.379) 0:00:24.373 **** 2025-09-20 10:43:12.994498 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994507 | orchestrator | 2025-09-20 10:43:12.994517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994527 | orchestrator | Saturday 20 September 2025 10:43:11 +0000 (0:00:00.211) 0:00:24.585 **** 2025-09-20 10:43:12.994536 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994546 | orchestrator | 2025-09-20 10:43:12.994556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994565 | orchestrator | Saturday 20 September 2025 10:43:11 +0000 (0:00:00.226) 0:00:24.811 **** 2025-09-20 10:43:12.994575 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994585 | orchestrator | 2025-09-20 10:43:12.994595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994604 | orchestrator | Saturday 20 September 2025 10:43:12 +0000 (0:00:00.685) 0:00:25.496 **** 2025-09-20 10:43:12.994614 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994623 | orchestrator | 2025-09-20 10:43:12.994633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994643 | orchestrator | Saturday 20 September 2025 10:43:12 +0000 (0:00:00.199) 0:00:25.696 **** 2025-09-20 10:43:12.994652 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994662 | orchestrator | 2025-09-20 10:43:12.994671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994681 | orchestrator | Saturday 20 September 2025 10:43:12 +0000 (0:00:00.216) 0:00:25.913 **** 2025-09-20 10:43:12.994691 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994700 | orchestrator | 2025-09-20 10:43:12.994716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:12.994726 | orchestrator | Saturday 20 September 2025 10:43:12 +0000 (0:00:00.220) 0:00:26.133 **** 2025-09-20 10:43:12.994736 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:12.994746 | orchestrator | 2025-09-20 10:43:12.994762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:23.381972 | orchestrator | Saturday 20 September 2025 10:43:12 +0000 (0:00:00.213) 0:00:26.347 **** 2025-09-20 10:43:23.382147 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382167 | orchestrator | 2025-09-20 10:43:23.382178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:23.382189 | orchestrator | Saturday 20 September 2025 10:43:13 +0000 (0:00:00.216) 0:00:26.564 **** 2025-09-20 10:43:23.382200 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99) 2025-09-20 10:43:23.382211 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99) 2025-09-20 
10:43:23.382221 | orchestrator | 2025-09-20 10:43:23.382231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:23.382241 | orchestrator | Saturday 20 September 2025 10:43:13 +0000 (0:00:00.428) 0:00:26.993 **** 2025-09-20 10:43:23.382251 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652) 2025-09-20 10:43:23.382261 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652) 2025-09-20 10:43:23.382270 | orchestrator | 2025-09-20 10:43:23.382280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:23.382290 | orchestrator | Saturday 20 September 2025 10:43:14 +0000 (0:00:00.460) 0:00:27.454 **** 2025-09-20 10:43:23.382300 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58) 2025-09-20 10:43:23.382309 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58) 2025-09-20 10:43:23.382319 | orchestrator | 2025-09-20 10:43:23.382329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:23.382339 | orchestrator | Saturday 20 September 2025 10:43:14 +0000 (0:00:00.441) 0:00:27.896 **** 2025-09-20 10:43:23.382348 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22) 2025-09-20 10:43:23.382358 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22) 2025-09-20 10:43:23.382368 | orchestrator | 2025-09-20 10:43:23.382410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:23.382421 | orchestrator | Saturday 20 September 2025 10:43:15 +0000 (0:00:00.488) 0:00:28.384 **** 2025-09-20 10:43:23.382431 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 10:43:23.382441 | orchestrator | 2025-09-20 10:43:23.382451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382460 | orchestrator | Saturday 20 September 2025 10:43:15 +0000 (0:00:00.325) 0:00:28.709 **** 2025-09-20 10:43:23.382472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-20 10:43:23.382483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-20 10:43:23.382494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-20 10:43:23.382505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-20 10:43:23.382516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-20 10:43:23.382527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-20 10:43:23.382556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-20 10:43:23.382592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-20 10:43:23.382604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-20 10:43:23.382614 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-20 10:43:23.382625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-20 10:43:23.382636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-20 10:43:23.382646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-20 10:43:23.382657 | orchestrator | 2025-09-20 10:43:23.382667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382677 | orchestrator | Saturday 20 September 2025 10:43:16 +0000 (0:00:00.693) 0:00:29.402 **** 2025-09-20 10:43:23.382686 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382696 | orchestrator | 2025-09-20 10:43:23.382706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382715 | orchestrator | Saturday 20 September 2025 10:43:16 +0000 (0:00:00.194) 0:00:29.597 **** 2025-09-20 10:43:23.382725 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382735 | orchestrator | 2025-09-20 10:43:23.382744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382754 | orchestrator | Saturday 20 September 2025 10:43:16 +0000 (0:00:00.212) 0:00:29.809 **** 2025-09-20 10:43:23.382764 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382773 | orchestrator | 2025-09-20 10:43:23.382783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382793 | orchestrator | Saturday 20 September 2025 10:43:16 +0000 (0:00:00.200) 0:00:30.010 **** 2025-09-20 10:43:23.382802 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382812 | orchestrator | 2025-09-20 10:43:23.382840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382851 | orchestrator | Saturday 20 September 2025 10:43:16 +0000 (0:00:00.200) 0:00:30.211 **** 2025-09-20 10:43:23.382860 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382870 | orchestrator | 2025-09-20 10:43:23.382880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382889 | orchestrator | Saturday 20 September 2025 10:43:17 +0000 (0:00:00.209) 0:00:30.420 **** 2025-09-20 10:43:23.382899 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382908 | orchestrator | 2025-09-20 10:43:23.382918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382928 | orchestrator | Saturday 20 September 2025 10:43:17 +0000 (0:00:00.220) 0:00:30.641 **** 2025-09-20 10:43:23.382938 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382947 | orchestrator | 2025-09-20 10:43:23.382957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.382967 | orchestrator | Saturday 20 September 2025 10:43:17 +0000 (0:00:00.197) 0:00:30.838 **** 2025-09-20 10:43:23.382976 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.382986 | orchestrator | 2025-09-20 10:43:23.382995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.383005 | orchestrator 
| Saturday 20 September 2025 10:43:17 +0000 (0:00:00.207) 0:00:31.046 **** 2025-09-20 10:43:23.383015 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-20 10:43:23.383025 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-20 10:43:23.383035 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-20 10:43:23.383044 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-20 10:43:23.383054 | orchestrator | 2025-09-20 10:43:23.383064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.383074 | orchestrator | Saturday 20 September 2025 10:43:18 +0000 (0:00:00.858) 0:00:31.905 **** 2025-09-20 10:43:23.383091 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.383100 | orchestrator | 2025-09-20 10:43:23.383110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.383120 | orchestrator | Saturday 20 September 2025 10:43:18 +0000 (0:00:00.196) 0:00:32.101 **** 2025-09-20 10:43:23.383129 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.383139 | orchestrator | 2025-09-20 10:43:23.383148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.383158 | orchestrator | Saturday 20 September 2025 10:43:18 +0000 (0:00:00.181) 0:00:32.282 **** 2025-09-20 10:43:23.383168 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.383177 | orchestrator | 2025-09-20 10:43:23.383187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:23.383196 | orchestrator | Saturday 20 September 2025 10:43:19 +0000 (0:00:00.661) 0:00:32.944 **** 2025-09-20 10:43:23.383206 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.383216 | orchestrator | 2025-09-20 10:43:23.383225 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-20 10:43:23.383235 | orchestrator | Saturday 20 September 2025 10:43:19 +0000 (0:00:00.192) 0:00:33.137 **** 2025-09-20 10:43:23.383249 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.383259 | orchestrator | 2025-09-20 10:43:23.383269 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-20 10:43:23.383279 | orchestrator | Saturday 20 September 2025 10:43:19 +0000 (0:00:00.135) 0:00:33.272 **** 2025-09-20 10:43:23.383288 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}}) 2025-09-20 10:43:23.383299 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7feb156-b84d-561e-a62b-66fdb35e8084'}}) 2025-09-20 10:43:23.383308 | orchestrator | 2025-09-20 10:43:23.383318 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-20 10:43:23.383328 | orchestrator | Saturday 20 September 2025 10:43:20 +0000 (0:00:00.200) 0:00:33.473 **** 2025-09-20 10:43:23.383338 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}) 2025-09-20 10:43:23.383350 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'}) 2025-09-20 10:43:23.383360 | orchestrator | 2025-09-20 10:43:23.383369 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-20 10:43:23.383396 | orchestrator | Saturday 20 September 2025 10:43:21 +0000 (0:00:01.863) 0:00:35.336 **** 2025-09-20 10:43:23.383407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:23.383418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:23.383427 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:23.383437 | orchestrator | 2025-09-20 10:43:23.383446 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-20 10:43:23.383456 | orchestrator | Saturday 20 September 2025 10:43:22 +0000 (0:00:00.148) 0:00:35.485 **** 2025-09-20 10:43:23.383465 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}) 2025-09-20 10:43:23.383475 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'}) 2025-09-20 10:43:23.383484 | orchestrator | 2025-09-20 10:43:23.383500 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-20 10:43:29.918569 | orchestrator | Saturday 20 September 2025 10:43:23 +0000 (0:00:01.247) 0:00:36.732 **** 2025-09-20 10:43:29.918698 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.918716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.918728 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.918741 | orchestrator | 2025-09-20 10:43:29.918754 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-20 10:43:29.918765 | orchestrator | Saturday 20 September 2025 10:43:23 +0000 (0:00:00.156) 0:00:36.889 **** 2025-09-20 10:43:29.918776 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.918787 | orchestrator | 2025-09-20 10:43:29.918798 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-20 10:43:29.918809 | orchestrator | Saturday 20 September 2025 10:43:23 +0000 (0:00:00.134) 0:00:37.024 **** 2025-09-20 10:43:29.918821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.918832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.918842 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.918853 | orchestrator | 2025-09-20 10:43:29.918864 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-20 10:43:29.918875 | orchestrator | Saturday 20 September 2025 10:43:23 +0000 (0:00:00.160) 0:00:37.185 **** 2025-09-20 10:43:29.918886 | orchestrator | skipping: [testbed-node-4] 
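For reference, the "Create block VGs" and "Create block LVs" tasks above set up one volume group and one logical volume per OSD data device (here /dev/sdb and /dev/sdc on testbed-node-4; the VG/LV names below are copied from the log output). A minimal shell sketch of the equivalent LVM calls, assuming standard LVM tooling and a data LV spanning the whole VG -- the role's actual module invocation is not shown in this log:

    # One per-OSD volume group on the raw device (vgcreate initialises the PV if needed).
    vgcreate ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be /dev/sdb
    # One data LV filling the VG, named after the OSD's LVM UUID; this VG/LV pair is
    # what the lvm_volumes items above reference as data_vg/data.
    lvcreate -l 100%FREE -n osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be \
        ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be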
2025-09-20 10:43:29.918896 | orchestrator | 2025-09-20 10:43:29.918907 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-20 10:43:29.918918 | orchestrator | Saturday 20 September 2025 10:43:23 +0000 (0:00:00.142) 0:00:37.328 **** 2025-09-20 10:43:29.918929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.918940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.918951 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.918962 | orchestrator | 2025-09-20 10:43:29.918972 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-20 10:43:29.918983 | orchestrator | Saturday 20 September 2025 10:43:24 +0000 (0:00:00.193) 0:00:37.521 **** 2025-09-20 10:43:29.919009 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919021 | orchestrator | 2025-09-20 10:43:29.919032 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-20 10:43:29.919043 | orchestrator | Saturday 20 September 2025 10:43:24 +0000 (0:00:00.358) 0:00:37.880 **** 2025-09-20 10:43:29.919054 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.919065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.919076 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919087 | orchestrator | 2025-09-20 10:43:29.919099 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-20 10:43:29.919111 | orchestrator | Saturday 20 September 2025 10:43:24 +0000 (0:00:00.181) 0:00:38.061 **** 2025-09-20 10:43:29.919124 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:29.919136 | orchestrator | 2025-09-20 10:43:29.919148 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-20 10:43:29.919160 | orchestrator | Saturday 20 September 2025 10:43:24 +0000 (0:00:00.140) 0:00:38.202 **** 2025-09-20 10:43:29.919183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.919197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.919209 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919221 | orchestrator | 2025-09-20 10:43:29.919234 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-20 10:43:29.919246 | orchestrator | Saturday 20 September 2025 10:43:24 +0000 (0:00:00.161) 0:00:38.363 **** 2025-09-20 10:43:29.919257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.919269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.919281 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919292 | orchestrator | 2025-09-20 10:43:29.919304 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-20 10:43:29.919317 | orchestrator | Saturday 20 September 2025 10:43:25 +0000 (0:00:00.159) 0:00:38.523 **** 2025-09-20 10:43:29.919346 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:29.919360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:29.919373 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919420 | orchestrator | 2025-09-20 10:43:29.919433 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-20 10:43:29.919445 | orchestrator | Saturday 20 September 2025 10:43:25 +0000 (0:00:00.165) 0:00:38.689 **** 2025-09-20 10:43:29.919456 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919467 | orchestrator | 2025-09-20 10:43:29.919478 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-20 10:43:29.919489 | orchestrator | Saturday 20 September 2025 10:43:25 +0000 (0:00:00.148) 0:00:38.838 **** 2025-09-20 10:43:29.919499 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919510 | orchestrator | 2025-09-20 10:43:29.919521 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-20 10:43:29.919532 | orchestrator | Saturday 20 September 2025 10:43:25 +0000 (0:00:00.171) 0:00:39.009 **** 2025-09-20 10:43:29.919542 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919553 | orchestrator | 2025-09-20 10:43:29.919564 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-20 10:43:29.919575 | orchestrator | Saturday 20 September 2025 10:43:25 +0000 (0:00:00.134) 0:00:39.144 **** 2025-09-20 10:43:29.919585 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 10:43:29.919596 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-20 10:43:29.919608 | orchestrator | } 2025-09-20 10:43:29.919619 | orchestrator | 2025-09-20 10:43:29.919630 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-20 10:43:29.919640 | orchestrator | Saturday 20 September 2025 10:43:25 +0000 (0:00:00.139) 0:00:39.284 **** 2025-09-20 10:43:29.919651 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 10:43:29.919662 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-20 10:43:29.919673 | orchestrator | } 2025-09-20 10:43:29.919684 | orchestrator | 2025-09-20 10:43:29.919695 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-20 10:43:29.919706 | orchestrator | Saturday 20 September 2025 10:43:26 +0000 (0:00:00.146) 0:00:39.430 **** 2025-09-20 10:43:29.919717 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 10:43:29.919728 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-20 10:43:29.919746 | orchestrator | } 2025-09-20 10:43:29.919757 | orchestrator | 2025-09-20 10:43:29.919768 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-20 10:43:29.919779 | orchestrator | Saturday 20 September 2025 10:43:26 +0000 (0:00:00.144) 0:00:39.574 **** 2025-09-20 10:43:29.919790 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:29.919801 | orchestrator | 2025-09-20 10:43:29.919812 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-20 10:43:29.919823 | orchestrator | Saturday 20 September 2025 10:43:26 +0000 (0:00:00.690) 0:00:40.264 **** 2025-09-20 10:43:29.919834 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:29.919844 | orchestrator | 2025-09-20 10:43:29.919855 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-20 10:43:29.919866 | orchestrator | Saturday 20 September 2025 10:43:28 +0000 (0:00:01.498) 0:00:41.762 **** 2025-09-20 10:43:29.919877 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:29.919888 | orchestrator | 2025-09-20 10:43:29.919899 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-20 10:43:29.919910 | orchestrator | Saturday 20 September 2025 10:43:28 +0000 (0:00:00.506) 0:00:42.269 **** 2025-09-20 10:43:29.919921 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:29.919932 | orchestrator | 2025-09-20 10:43:29.919942 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-20 10:43:29.919953 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.147) 0:00:42.416 **** 2025-09-20 10:43:29.919964 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.919975 | orchestrator | 2025-09-20 10:43:29.919986 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-20 10:43:29.919997 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.122) 0:00:42.538 **** 2025-09-20 10:43:29.920015 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.920027 | orchestrator | 2025-09-20 10:43:29.920038 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-20 10:43:29.920049 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.092) 0:00:42.631 **** 2025-09-20 10:43:29.920060 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 10:43:29.920071 | orchestrator |  "vgs_report": { 2025-09-20 10:43:29.920083 | orchestrator |  "vg": [] 2025-09-20 10:43:29.920094 | orchestrator |  } 2025-09-20 10:43:29.920105 | orchestrator | } 2025-09-20 10:43:29.920117 | orchestrator | 2025-09-20 10:43:29.920128 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-20 10:43:29.920139 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.119) 0:00:42.751 **** 2025-09-20 10:43:29.920150 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.920160 | orchestrator | 2025-09-20 10:43:29.920172 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-20 10:43:29.920182 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.116) 0:00:42.867 **** 2025-09-20 10:43:29.920193 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.920204 | orchestrator | 2025-09-20 10:43:29.920215 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-20 10:43:29.920226 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 
(0:00:00.124) 0:00:42.991 **** 2025-09-20 10:43:29.920237 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.920248 | orchestrator | 2025-09-20 10:43:29.920259 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-20 10:43:29.920270 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.154) 0:00:43.146 **** 2025-09-20 10:43:29.920281 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:29.920291 | orchestrator | 2025-09-20 10:43:29.920303 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-20 10:43:29.920320 | orchestrator | Saturday 20 September 2025 10:43:29 +0000 (0:00:00.127) 0:00:43.274 **** 2025-09-20 10:43:34.467777 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.467886 | orchestrator | 2025-09-20 10:43:34.467928 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-20 10:43:34.467942 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.138) 0:00:43.412 **** 2025-09-20 10:43:34.467954 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.467964 | orchestrator | 2025-09-20 10:43:34.467975 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-20 10:43:34.467987 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.296) 0:00:43.708 **** 2025-09-20 10:43:34.467997 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468008 | orchestrator | 2025-09-20 10:43:34.468019 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-20 10:43:34.468030 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.128) 0:00:43.837 **** 2025-09-20 10:43:34.468041 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468052 | orchestrator | 2025-09-20 10:43:34.468063 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-20 10:43:34.468074 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.123) 0:00:43.960 **** 2025-09-20 10:43:34.468084 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468095 | orchestrator | 2025-09-20 10:43:34.468106 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-20 10:43:34.468116 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.120) 0:00:44.081 **** 2025-09-20 10:43:34.468127 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468138 | orchestrator | 2025-09-20 10:43:34.468148 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-20 10:43:34.468159 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.146) 0:00:44.227 **** 2025-09-20 10:43:34.468170 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468181 | orchestrator | 2025-09-20 10:43:34.468191 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-20 10:43:34.468202 | orchestrator | Saturday 20 September 2025 10:43:30 +0000 (0:00:00.123) 0:00:44.351 **** 2025-09-20 10:43:34.468212 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468223 | orchestrator | 2025-09-20 10:43:34.468234 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-20 10:43:34.468245 | orchestrator | Saturday 20 September 2025 
10:43:31 +0000 (0:00:00.131) 0:00:44.482 **** 2025-09-20 10:43:34.468255 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468266 | orchestrator | 2025-09-20 10:43:34.468277 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-20 10:43:34.468287 | orchestrator | Saturday 20 September 2025 10:43:31 +0000 (0:00:00.129) 0:00:44.612 **** 2025-09-20 10:43:34.468298 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468311 | orchestrator | 2025-09-20 10:43:34.468323 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-20 10:43:34.468335 | orchestrator | Saturday 20 September 2025 10:43:31 +0000 (0:00:00.142) 0:00:44.754 **** 2025-09-20 10:43:34.468363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468414 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468426 | orchestrator | 2025-09-20 10:43:34.468438 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-20 10:43:34.468450 | orchestrator | Saturday 20 September 2025 10:43:31 +0000 (0:00:00.154) 0:00:44.908 **** 2025-09-20 10:43:34.468462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468497 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468509 | orchestrator | 2025-09-20 10:43:34.468521 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-20 10:43:34.468533 | orchestrator | Saturday 20 September 2025 10:43:31 +0000 (0:00:00.167) 0:00:45.076 **** 2025-09-20 10:43:34.468545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468569 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468581 | orchestrator | 2025-09-20 10:43:34.468593 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-20 10:43:34.468605 | orchestrator | Saturday 20 September 2025 10:43:31 +0000 (0:00:00.157) 0:00:45.234 **** 2025-09-20 10:43:34.468617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468641 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468653 | orchestrator | 2025-09-20 10:43:34.468664 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-20 10:43:34.468693 | orchestrator | Saturday 20 September 2025 10:43:32 +0000 (0:00:00.302) 0:00:45.536 **** 2025-09-20 10:43:34.468705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468727 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468737 | orchestrator | 2025-09-20 10:43:34.468748 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-20 10:43:34.468759 | orchestrator | Saturday 20 September 2025 10:43:32 +0000 (0:00:00.164) 0:00:45.701 **** 2025-09-20 10:43:34.468769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468780 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468791 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468802 | orchestrator | 2025-09-20 10:43:34.468813 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-20 10:43:34.468824 | orchestrator | Saturday 20 September 2025 10:43:32 +0000 (0:00:00.162) 0:00:45.863 **** 2025-09-20 10:43:34.468834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468856 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468867 | orchestrator | 2025-09-20 10:43:34.468878 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-20 10:43:34.468888 | orchestrator | Saturday 20 September 2025 10:43:32 +0000 (0:00:00.166) 0:00:46.029 **** 2025-09-20 10:43:34.468899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.468917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.468928 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.468939 | orchestrator | 2025-09-20 10:43:34.468955 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-20 10:43:34.468966 | orchestrator | Saturday 20 September 2025 10:43:32 +0000 (0:00:00.134) 0:00:46.164 **** 2025-09-20 10:43:34.468977 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:34.468988 | orchestrator | 2025-09-20 10:43:34.468999 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-20 10:43:34.469010 | orchestrator | Saturday 20 September 2025 10:43:33 +0000 (0:00:00.505) 
0:00:46.669 **** 2025-09-20 10:43:34.469020 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:34.469031 | orchestrator | 2025-09-20 10:43:34.469041 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-20 10:43:34.469052 | orchestrator | Saturday 20 September 2025 10:43:33 +0000 (0:00:00.504) 0:00:47.175 **** 2025-09-20 10:43:34.469063 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:43:34.469074 | orchestrator | 2025-09-20 10:43:34.469084 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-20 10:43:34.469095 | orchestrator | Saturday 20 September 2025 10:43:33 +0000 (0:00:00.148) 0:00:47.323 **** 2025-09-20 10:43:34.469106 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'vg_name': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}) 2025-09-20 10:43:34.469118 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'vg_name': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'}) 2025-09-20 10:43:34.469129 | orchestrator | 2025-09-20 10:43:34.469140 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-20 10:43:34.469150 | orchestrator | Saturday 20 September 2025 10:43:34 +0000 (0:00:00.173) 0:00:47.497 **** 2025-09-20 10:43:34.469161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.469172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.469183 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:34.469193 | orchestrator | 2025-09-20 10:43:34.469204 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-20 10:43:34.469215 | orchestrator | Saturday 20 September 2025 10:43:34 +0000 (0:00:00.163) 0:00:47.660 **** 2025-09-20 10:43:34.469225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:34.469236 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:34.469253 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:40.138788 | orchestrator | 2025-09-20 10:43:40.138871 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-20 10:43:40.138882 | orchestrator | Saturday 20 September 2025 10:43:34 +0000 (0:00:00.162) 0:00:47.822 **** 2025-09-20 10:43:40.138890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'})  2025-09-20 10:43:40.138898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'})  2025-09-20 10:43:40.138905 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:43:40.138912 | orchestrator | 2025-09-20 10:43:40.138919 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-20 10:43:40.138925 
| orchestrator | Saturday 20 September 2025 10:43:34 +0000 (0:00:00.182) 0:00:48.004 **** 2025-09-20 10:43:40.138948 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 10:43:40.138955 | orchestrator |  "lvm_report": { 2025-09-20 10:43:40.138963 | orchestrator |  "lv": [ 2025-09-20 10:43:40.138970 | orchestrator |  { 2025-09-20 10:43:40.138976 | orchestrator |  "lv_name": "osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be", 2025-09-20 10:43:40.138984 | orchestrator |  "vg_name": "ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be" 2025-09-20 10:43:40.138990 | orchestrator |  }, 2025-09-20 10:43:40.138996 | orchestrator |  { 2025-09-20 10:43:40.139003 | orchestrator |  "lv_name": "osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084", 2025-09-20 10:43:40.139009 | orchestrator |  "vg_name": "ceph-d7feb156-b84d-561e-a62b-66fdb35e8084" 2025-09-20 10:43:40.139015 | orchestrator |  } 2025-09-20 10:43:40.139021 | orchestrator |  ], 2025-09-20 10:43:40.139027 | orchestrator |  "pv": [ 2025-09-20 10:43:40.139034 | orchestrator |  { 2025-09-20 10:43:40.139040 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-20 10:43:40.139046 | orchestrator |  "vg_name": "ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be" 2025-09-20 10:43:40.139052 | orchestrator |  }, 2025-09-20 10:43:40.139059 | orchestrator |  { 2025-09-20 10:43:40.139065 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-20 10:43:40.139071 | orchestrator |  "vg_name": "ceph-d7feb156-b84d-561e-a62b-66fdb35e8084" 2025-09-20 10:43:40.139078 | orchestrator |  } 2025-09-20 10:43:40.139084 | orchestrator |  ] 2025-09-20 10:43:40.139090 | orchestrator |  } 2025-09-20 10:43:40.139096 | orchestrator | } 2025-09-20 10:43:40.139103 | orchestrator | 2025-09-20 10:43:40.139109 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-20 10:43:40.139116 | orchestrator | 2025-09-20 10:43:40.139122 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:43:40.139128 | orchestrator | Saturday 20 September 2025 10:43:35 +0000 (0:00:00.490) 0:00:48.495 **** 2025-09-20 10:43:40.139134 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-20 10:43:40.139141 | orchestrator | 2025-09-20 10:43:40.139147 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 10:43:40.139153 | orchestrator | Saturday 20 September 2025 10:43:35 +0000 (0:00:00.262) 0:00:48.758 **** 2025-09-20 10:43:40.139160 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:40.139166 | orchestrator | 2025-09-20 10:43:40.139172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139179 | orchestrator | Saturday 20 September 2025 10:43:35 +0000 (0:00:00.224) 0:00:48.982 **** 2025-09-20 10:43:40.139185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-20 10:43:40.139191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-20 10:43:40.139198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-20 10:43:40.139204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-20 10:43:40.139210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-20 10:43:40.139216 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-20 10:43:40.139222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-20 10:43:40.139228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-20 10:43:40.139235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-20 10:43:40.139241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-20 10:43:40.139247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-20 10:43:40.139257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-20 10:43:40.139264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-20 10:43:40.139270 | orchestrator | 2025-09-20 10:43:40.139276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139282 | orchestrator | Saturday 20 September 2025 10:43:36 +0000 (0:00:00.438) 0:00:49.421 **** 2025-09-20 10:43:40.139288 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139297 | orchestrator | 2025-09-20 10:43:40.139303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139310 | orchestrator | Saturday 20 September 2025 10:43:36 +0000 (0:00:00.190) 0:00:49.611 **** 2025-09-20 10:43:40.139316 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139322 | orchestrator | 2025-09-20 10:43:40.139328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139348 | orchestrator | Saturday 20 September 2025 10:43:36 +0000 (0:00:00.184) 0:00:49.796 **** 2025-09-20 10:43:40.139356 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139363 | orchestrator | 2025-09-20 10:43:40.139370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139402 | orchestrator | Saturday 20 September 2025 10:43:36 +0000 (0:00:00.186) 0:00:49.983 **** 2025-09-20 10:43:40.139410 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139417 | orchestrator | 2025-09-20 10:43:40.139424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139431 | orchestrator | Saturday 20 September 2025 10:43:36 +0000 (0:00:00.173) 0:00:50.156 **** 2025-09-20 10:43:40.139438 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139444 | orchestrator | 2025-09-20 10:43:40.139488 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139496 | orchestrator | Saturday 20 September 2025 10:43:36 +0000 (0:00:00.178) 0:00:50.334 **** 2025-09-20 10:43:40.139503 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139510 | orchestrator | 2025-09-20 10:43:40.139517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139523 | orchestrator | Saturday 20 September 2025 10:43:37 +0000 (0:00:00.547) 0:00:50.882 **** 2025-09-20 10:43:40.139529 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139535 | orchestrator | 2025-09-20 10:43:40.139541 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-20 10:43:40.139548 | orchestrator | Saturday 20 September 2025 10:43:37 +0000 (0:00:00.174) 0:00:51.057 **** 2025-09-20 10:43:40.139554 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:40.139560 | orchestrator | 2025-09-20 10:43:40.139566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139572 | orchestrator | Saturday 20 September 2025 10:43:37 +0000 (0:00:00.205) 0:00:51.262 **** 2025-09-20 10:43:40.139578 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d) 2025-09-20 10:43:40.139586 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d) 2025-09-20 10:43:40.139592 | orchestrator | 2025-09-20 10:43:40.139598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139604 | orchestrator | Saturday 20 September 2025 10:43:38 +0000 (0:00:00.383) 0:00:51.646 **** 2025-09-20 10:43:40.139611 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b) 2025-09-20 10:43:40.139617 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b) 2025-09-20 10:43:40.139623 | orchestrator | 2025-09-20 10:43:40.139629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139636 | orchestrator | Saturday 20 September 2025 10:43:38 +0000 (0:00:00.385) 0:00:52.031 **** 2025-09-20 10:43:40.139650 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8) 2025-09-20 10:43:40.139656 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8) 2025-09-20 10:43:40.139663 | orchestrator | 2025-09-20 10:43:40.139669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139675 | orchestrator | Saturday 20 September 2025 10:43:39 +0000 (0:00:00.404) 0:00:52.436 **** 2025-09-20 10:43:40.139681 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b) 2025-09-20 10:43:40.139688 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b) 2025-09-20 10:43:40.139694 | orchestrator | 2025-09-20 10:43:40.139700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 10:43:40.139706 | orchestrator | Saturday 20 September 2025 10:43:39 +0000 (0:00:00.394) 0:00:52.830 **** 2025-09-20 10:43:40.139712 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 10:43:40.139719 | orchestrator | 2025-09-20 10:43:40.139725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:40.139731 | orchestrator | Saturday 20 September 2025 10:43:39 +0000 (0:00:00.300) 0:00:53.131 **** 2025-09-20 10:43:40.139737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-20 10:43:40.139743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-20 10:43:40.139750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-20 10:43:40.139756 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-20 10:43:40.139762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-20 10:43:40.139768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-20 10:43:40.139774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-20 10:43:40.139780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-20 10:43:40.139786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-20 10:43:40.139793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-20 10:43:40.139799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-20 10:43:40.139810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-20 10:43:48.587455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-20 10:43:48.587565 | orchestrator | 2025-09-20 10:43:48.587582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587595 | orchestrator | Saturday 20 September 2025 10:43:40 +0000 (0:00:00.360) 0:00:53.491 **** 2025-09-20 10:43:48.587607 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587619 | orchestrator | 2025-09-20 10:43:48.587631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587642 | orchestrator | Saturday 20 September 2025 10:43:40 +0000 (0:00:00.178) 0:00:53.670 **** 2025-09-20 10:43:48.587653 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587664 | orchestrator | 2025-09-20 10:43:48.587675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587686 | orchestrator | Saturday 20 September 2025 10:43:40 +0000 (0:00:00.183) 0:00:53.853 **** 2025-09-20 10:43:48.587698 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587708 | orchestrator | 2025-09-20 10:43:48.587719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587754 | orchestrator | Saturday 20 September 2025 10:43:41 +0000 (0:00:00.524) 0:00:54.378 **** 2025-09-20 10:43:48.587766 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587776 | orchestrator | 2025-09-20 10:43:48.587787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587798 | orchestrator | Saturday 20 September 2025 10:43:41 +0000 (0:00:00.182) 0:00:54.560 **** 2025-09-20 10:43:48.587809 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587820 | orchestrator | 2025-09-20 10:43:48.587831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587842 | orchestrator | Saturday 20 September 2025 10:43:41 +0000 (0:00:00.191) 0:00:54.752 **** 2025-09-20 10:43:48.587853 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587864 | orchestrator | 2025-09-20 10:43:48.587875 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-20 10:43:48.587886 | orchestrator | Saturday 20 September 2025 10:43:41 +0000 (0:00:00.191) 0:00:54.943 **** 2025-09-20 10:43:48.587896 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587907 | orchestrator | 2025-09-20 10:43:48.587918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587929 | orchestrator | Saturday 20 September 2025 10:43:41 +0000 (0:00:00.208) 0:00:55.152 **** 2025-09-20 10:43:48.587940 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.587953 | orchestrator | 2025-09-20 10:43:48.587965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.587977 | orchestrator | Saturday 20 September 2025 10:43:41 +0000 (0:00:00.187) 0:00:55.339 **** 2025-09-20 10:43:48.587989 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-20 10:43:48.588002 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-20 10:43:48.588042 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-20 10:43:48.588056 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-20 10:43:48.588068 | orchestrator | 2025-09-20 10:43:48.588081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.588094 | orchestrator | Saturday 20 September 2025 10:43:42 +0000 (0:00:00.608) 0:00:55.948 **** 2025-09-20 10:43:48.588105 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588117 | orchestrator | 2025-09-20 10:43:48.588129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.588142 | orchestrator | Saturday 20 September 2025 10:43:42 +0000 (0:00:00.175) 0:00:56.124 **** 2025-09-20 10:43:48.588154 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588166 | orchestrator | 2025-09-20 10:43:48.588179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.588191 | orchestrator | Saturday 20 September 2025 10:43:42 +0000 (0:00:00.185) 0:00:56.309 **** 2025-09-20 10:43:48.588204 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588216 | orchestrator | 2025-09-20 10:43:48.588228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 10:43:48.588240 | orchestrator | Saturday 20 September 2025 10:43:43 +0000 (0:00:00.187) 0:00:56.497 **** 2025-09-20 10:43:48.588252 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588264 | orchestrator | 2025-09-20 10:43:48.588276 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-20 10:43:48.588288 | orchestrator | Saturday 20 September 2025 10:43:43 +0000 (0:00:00.176) 0:00:56.673 **** 2025-09-20 10:43:48.588300 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588311 | orchestrator | 2025-09-20 10:43:48.588322 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-20 10:43:48.588333 | orchestrator | Saturday 20 September 2025 10:43:43 +0000 (0:00:00.252) 0:00:56.926 **** 2025-09-20 10:43:48.588344 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43c75cb2-27fe-5978-b049-f1a35c211e19'}}) 2025-09-20 10:43:48.588356 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}}) 2025-09-20 10:43:48.588376 | orchestrator | 2025-09-20 10:43:48.588406 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-20 10:43:48.588418 | orchestrator | Saturday 20 September 2025 10:43:43 +0000 (0:00:00.169) 0:00:57.095 **** 2025-09-20 10:43:48.588429 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'}) 2025-09-20 10:43:48.588442 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}) 2025-09-20 10:43:48.588453 | orchestrator | 2025-09-20 10:43:48.588464 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-20 10:43:48.588493 | orchestrator | Saturday 20 September 2025 10:43:45 +0000 (0:00:01.813) 0:00:58.908 **** 2025-09-20 10:43:48.588506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:48.588518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:48.588529 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588539 | orchestrator | 2025-09-20 10:43:48.588550 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-20 10:43:48.588561 | orchestrator | Saturday 20 September 2025 10:43:45 +0000 (0:00:00.151) 0:00:59.060 **** 2025-09-20 10:43:48.588572 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'}) 2025-09-20 10:43:48.588583 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}) 2025-09-20 10:43:48.588594 | orchestrator | 2025-09-20 10:43:48.588605 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-20 10:43:48.588616 | orchestrator | Saturday 20 September 2025 10:43:46 +0000 (0:00:01.288) 0:01:00.348 **** 2025-09-20 10:43:48.588627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:48.588638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:48.588649 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588660 | orchestrator | 2025-09-20 10:43:48.588671 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-20 10:43:48.588681 | orchestrator | Saturday 20 September 2025 10:43:47 +0000 (0:00:00.155) 0:01:00.504 **** 2025-09-20 10:43:48.588692 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588703 | orchestrator | 2025-09-20 10:43:48.588713 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-20 10:43:48.588724 | orchestrator | Saturday 20 September 2025 10:43:47 +0000 (0:00:00.142) 0:01:00.646 **** 2025-09-20 
10:43:48.588735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:48.588751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:48.588762 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588773 | orchestrator | 2025-09-20 10:43:48.588784 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-20 10:43:48.588795 | orchestrator | Saturday 20 September 2025 10:43:47 +0000 (0:00:00.153) 0:01:00.800 **** 2025-09-20 10:43:48.588805 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588823 | orchestrator | 2025-09-20 10:43:48.588834 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-20 10:43:48.588845 | orchestrator | Saturday 20 September 2025 10:43:47 +0000 (0:00:00.148) 0:01:00.948 **** 2025-09-20 10:43:48.588855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:48.588866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:48.588877 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588888 | orchestrator | 2025-09-20 10:43:48.588899 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-20 10:43:48.588910 | orchestrator | Saturday 20 September 2025 10:43:47 +0000 (0:00:00.160) 0:01:01.109 **** 2025-09-20 10:43:48.588921 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588931 | orchestrator | 2025-09-20 10:43:48.588942 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-20 10:43:48.588953 | orchestrator | Saturday 20 September 2025 10:43:47 +0000 (0:00:00.154) 0:01:01.264 **** 2025-09-20 10:43:48.588964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:48.588974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:48.588985 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:48.588996 | orchestrator | 2025-09-20 10:43:48.589007 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-20 10:43:48.589018 | orchestrator | Saturday 20 September 2025 10:43:48 +0000 (0:00:00.160) 0:01:01.424 **** 2025-09-20 10:43:48.589029 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:48.589040 | orchestrator | 2025-09-20 10:43:48.589051 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-20 10:43:48.589062 | orchestrator | Saturday 20 September 2025 10:43:48 +0000 (0:00:00.358) 0:01:01.783 **** 2025-09-20 10:43:48.589079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:54.043673 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:54.043802 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.043821 | orchestrator | 2025-09-20 10:43:54.043867 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-20 10:43:54.043883 | orchestrator | Saturday 20 September 2025 10:43:48 +0000 (0:00:00.160) 0:01:01.943 **** 2025-09-20 10:43:54.043895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:54.043906 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:54.043917 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.043929 | orchestrator | 2025-09-20 10:43:54.043941 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-20 10:43:54.043952 | orchestrator | Saturday 20 September 2025 10:43:48 +0000 (0:00:00.154) 0:01:02.098 **** 2025-09-20 10:43:54.043963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:54.043974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:54.043985 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044017 | orchestrator | 2025-09-20 10:43:54.044029 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-20 10:43:54.044040 | orchestrator | Saturday 20 September 2025 10:43:48 +0000 (0:00:00.146) 0:01:02.245 **** 2025-09-20 10:43:54.044051 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044061 | orchestrator | 2025-09-20 10:43:54.044072 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-20 10:43:54.044083 | orchestrator | Saturday 20 September 2025 10:43:48 +0000 (0:00:00.114) 0:01:02.359 **** 2025-09-20 10:43:54.044093 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044104 | orchestrator | 2025-09-20 10:43:54.044115 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-20 10:43:54.044125 | orchestrator | Saturday 20 September 2025 10:43:49 +0000 (0:00:00.135) 0:01:02.495 **** 2025-09-20 10:43:54.044136 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044147 | orchestrator | 2025-09-20 10:43:54.044157 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-20 10:43:54.044168 | orchestrator | Saturday 20 September 2025 10:43:49 +0000 (0:00:00.121) 0:01:02.617 **** 2025-09-20 10:43:54.044179 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 10:43:54.044191 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-20 10:43:54.044202 | orchestrator | } 2025-09-20 10:43:54.044213 | orchestrator | 2025-09-20 10:43:54.044224 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-20 10:43:54.044234 | orchestrator | Saturday 20 September 2025 10:43:49 +0000 (0:00:00.126) 
0:01:02.743 **** 2025-09-20 10:43:54.044245 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 10:43:54.044256 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-20 10:43:54.044267 | orchestrator | } 2025-09-20 10:43:54.044278 | orchestrator | 2025-09-20 10:43:54.044288 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-20 10:43:54.044300 | orchestrator | Saturday 20 September 2025 10:43:49 +0000 (0:00:00.127) 0:01:02.870 **** 2025-09-20 10:43:54.044311 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 10:43:54.044322 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-20 10:43:54.044333 | orchestrator | } 2025-09-20 10:43:54.044344 | orchestrator | 2025-09-20 10:43:54.044355 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-20 10:43:54.044365 | orchestrator | Saturday 20 September 2025 10:43:49 +0000 (0:00:00.135) 0:01:03.005 **** 2025-09-20 10:43:54.044376 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:54.044411 | orchestrator | 2025-09-20 10:43:54.044422 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-20 10:43:54.044433 | orchestrator | Saturday 20 September 2025 10:43:50 +0000 (0:00:00.465) 0:01:03.470 **** 2025-09-20 10:43:54.044450 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:54.044468 | orchestrator | 2025-09-20 10:43:54.044488 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-20 10:43:54.044505 | orchestrator | Saturday 20 September 2025 10:43:50 +0000 (0:00:00.472) 0:01:03.943 **** 2025-09-20 10:43:54.044522 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:54.044538 | orchestrator | 2025-09-20 10:43:54.044555 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-20 10:43:54.044572 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.603) 0:01:04.546 **** 2025-09-20 10:43:54.044588 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:54.044605 | orchestrator | 2025-09-20 10:43:54.044622 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-20 10:43:54.044640 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.166) 0:01:04.712 **** 2025-09-20 10:43:54.044658 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044676 | orchestrator | 2025-09-20 10:43:54.044693 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-20 10:43:54.044711 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.122) 0:01:04.835 **** 2025-09-20 10:43:54.044745 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044764 | orchestrator | 2025-09-20 10:43:54.044783 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-20 10:43:54.044802 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.108) 0:01:04.944 **** 2025-09-20 10:43:54.044822 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 10:43:54.044865 | orchestrator |  "vgs_report": { 2025-09-20 10:43:54.044887 | orchestrator |  "vg": [] 2025-09-20 10:43:54.044920 | orchestrator |  } 2025-09-20 10:43:54.044933 | orchestrator | } 2025-09-20 10:43:54.044944 | orchestrator | 2025-09-20 10:43:54.044955 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
2025-09-20 10:43:54.044967 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.122) 0:01:05.066 **** 2025-09-20 10:43:54.044977 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.044988 | orchestrator | 2025-09-20 10:43:54.044999 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-20 10:43:54.045010 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.126) 0:01:05.192 **** 2025-09-20 10:43:54.045021 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045032 | orchestrator | 2025-09-20 10:43:54.045043 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-20 10:43:54.045054 | orchestrator | Saturday 20 September 2025 10:43:51 +0000 (0:00:00.127) 0:01:05.319 **** 2025-09-20 10:43:54.045065 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045075 | orchestrator | 2025-09-20 10:43:54.045087 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-20 10:43:54.045098 | orchestrator | Saturday 20 September 2025 10:43:52 +0000 (0:00:00.133) 0:01:05.453 **** 2025-09-20 10:43:54.045108 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045119 | orchestrator | 2025-09-20 10:43:54.045130 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-20 10:43:54.045141 | orchestrator | Saturday 20 September 2025 10:43:52 +0000 (0:00:00.120) 0:01:05.574 **** 2025-09-20 10:43:54.045152 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045163 | orchestrator | 2025-09-20 10:43:54.045174 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-20 10:43:54.045185 | orchestrator | Saturday 20 September 2025 10:43:52 +0000 (0:00:00.128) 0:01:05.703 **** 2025-09-20 10:43:54.045196 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045207 | orchestrator | 2025-09-20 10:43:54.045218 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-20 10:43:54.045229 | orchestrator | Saturday 20 September 2025 10:43:52 +0000 (0:00:00.140) 0:01:05.843 **** 2025-09-20 10:43:54.045240 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045251 | orchestrator | 2025-09-20 10:43:54.045262 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-20 10:43:54.045273 | orchestrator | Saturday 20 September 2025 10:43:52 +0000 (0:00:00.136) 0:01:05.980 **** 2025-09-20 10:43:54.045284 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045295 | orchestrator | 2025-09-20 10:43:54.045306 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-20 10:43:54.045317 | orchestrator | Saturday 20 September 2025 10:43:52 +0000 (0:00:00.117) 0:01:06.097 **** 2025-09-20 10:43:54.045328 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045338 | orchestrator | 2025-09-20 10:43:54.045349 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-20 10:43:54.045366 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.277) 0:01:06.375 **** 2025-09-20 10:43:54.045377 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045417 | orchestrator | 2025-09-20 10:43:54.045428 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-20 10:43:54.045439 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.133) 0:01:06.509 **** 2025-09-20 10:43:54.045450 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045471 | orchestrator | 2025-09-20 10:43:54.045482 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-20 10:43:54.045494 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.111) 0:01:06.620 **** 2025-09-20 10:43:54.045505 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045516 | orchestrator | 2025-09-20 10:43:54.045528 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-20 10:43:54.045539 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.129) 0:01:06.750 **** 2025-09-20 10:43:54.045550 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045561 | orchestrator | 2025-09-20 10:43:54.045572 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-20 10:43:54.045583 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.116) 0:01:06.866 **** 2025-09-20 10:43:54.045594 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045605 | orchestrator | 2025-09-20 10:43:54.045616 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-20 10:43:54.045627 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.106) 0:01:06.972 **** 2025-09-20 10:43:54.045638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:54.045649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:54.045660 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045671 | orchestrator | 2025-09-20 10:43:54.045682 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-20 10:43:54.045693 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.149) 0:01:07.121 **** 2025-09-20 10:43:54.045704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:54.045715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:54.045726 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:54.045737 | orchestrator | 2025-09-20 10:43:54.045748 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-20 10:43:54.045759 | orchestrator | Saturday 20 September 2025 10:43:53 +0000 (0:00:00.136) 0:01:07.258 **** 2025-09-20 10:43:54.045778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943055 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943165 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
10:43:56.943182 | orchestrator | 2025-09-20 10:43:56.943195 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-20 10:43:56.943208 | orchestrator | Saturday 20 September 2025 10:43:54 +0000 (0:00:00.142) 0:01:07.401 **** 2025-09-20 10:43:56.943220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943231 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943242 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943253 | orchestrator | 2025-09-20 10:43:56.943265 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-20 10:43:56.943276 | orchestrator | Saturday 20 September 2025 10:43:54 +0000 (0:00:00.142) 0:01:07.544 **** 2025-09-20 10:43:56.943287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943335 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943346 | orchestrator | 2025-09-20 10:43:56.943357 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-20 10:43:56.943368 | orchestrator | Saturday 20 September 2025 10:43:54 +0000 (0:00:00.131) 0:01:07.675 **** 2025-09-20 10:43:56.943378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943447 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943458 | orchestrator | 2025-09-20 10:43:56.943483 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-20 10:43:56.943494 | orchestrator | Saturday 20 September 2025 10:43:54 +0000 (0:00:00.128) 0:01:07.803 **** 2025-09-20 10:43:56.943505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943527 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943538 | orchestrator | 2025-09-20 10:43:56.943548 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-20 10:43:56.943560 | orchestrator | Saturday 20 September 2025 10:43:54 +0000 (0:00:00.371) 0:01:08.175 **** 2025-09-20 10:43:56.943572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943585 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943597 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943609 | orchestrator | 2025-09-20 10:43:56.943621 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-20 10:43:56.943632 | orchestrator | Saturday 20 September 2025 10:43:55 +0000 (0:00:00.200) 0:01:08.375 **** 2025-09-20 10:43:56.943645 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:56.943658 | orchestrator | 2025-09-20 10:43:56.943670 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-20 10:43:56.943682 | orchestrator | Saturday 20 September 2025 10:43:55 +0000 (0:00:00.491) 0:01:08.867 **** 2025-09-20 10:43:56.943694 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:56.943706 | orchestrator | 2025-09-20 10:43:56.943718 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-20 10:43:56.943730 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.500) 0:01:09.367 **** 2025-09-20 10:43:56.943742 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:43:56.943754 | orchestrator | 2025-09-20 10:43:56.943766 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-20 10:43:56.943779 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.149) 0:01:09.517 **** 2025-09-20 10:43:56.943791 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'vg_name': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'}) 2025-09-20 10:43:56.943804 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'vg_name': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}) 2025-09-20 10:43:56.943816 | orchestrator | 2025-09-20 10:43:56.943828 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-20 10:43:56.943851 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.181) 0:01:09.699 **** 2025-09-20 10:43:56.943881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943907 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943919 | orchestrator | 2025-09-20 10:43:56.943930 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-20 10:43:56.943941 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.146) 0:01:09.846 **** 2025-09-20 10:43:56.943952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.943963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.943975 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.943986 | orchestrator | 2025-09-20 10:43:56.943997 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-20 10:43:56.944008 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.152) 0:01:09.999 **** 2025-09-20 10:43:56.944019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'})  2025-09-20 10:43:56.944030 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'})  2025-09-20 10:43:56.944041 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:43:56.944051 | orchestrator | 2025-09-20 10:43:56.944062 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-20 10:43:56.944074 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.135) 0:01:10.135 **** 2025-09-20 10:43:56.944084 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 10:43:56.944095 | orchestrator |  "lvm_report": { 2025-09-20 10:43:56.944107 | orchestrator |  "lv": [ 2025-09-20 10:43:56.944118 | orchestrator |  { 2025-09-20 10:43:56.944130 | orchestrator |  "lv_name": "osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19", 2025-09-20 10:43:56.944146 | orchestrator |  "vg_name": "ceph-43c75cb2-27fe-5978-b049-f1a35c211e19" 2025-09-20 10:43:56.944157 | orchestrator |  }, 2025-09-20 10:43:56.944168 | orchestrator |  { 2025-09-20 10:43:56.944180 | orchestrator |  "lv_name": "osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d", 2025-09-20 10:43:56.944191 | orchestrator |  "vg_name": "ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d" 2025-09-20 10:43:56.944201 | orchestrator |  } 2025-09-20 10:43:56.944212 | orchestrator |  ], 2025-09-20 10:43:56.944223 | orchestrator |  "pv": [ 2025-09-20 10:43:56.944234 | orchestrator |  { 2025-09-20 10:43:56.944245 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-20 10:43:56.944256 | orchestrator |  "vg_name": "ceph-43c75cb2-27fe-5978-b049-f1a35c211e19" 2025-09-20 10:43:56.944267 | orchestrator |  }, 2025-09-20 10:43:56.944278 | orchestrator |  { 2025-09-20 10:43:56.944289 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-20 10:43:56.944300 | orchestrator |  "vg_name": "ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d" 2025-09-20 10:43:56.944311 | orchestrator |  } 2025-09-20 10:43:56.944322 | orchestrator |  ] 2025-09-20 10:43:56.944333 | orchestrator |  } 2025-09-20 10:43:56.944344 | orchestrator | } 2025-09-20 10:43:56.944355 | orchestrator | 2025-09-20 10:43:56.944366 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:43:56.944410 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-20 10:43:56.944423 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-20 10:43:56.944434 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-20 10:43:56.944445 | orchestrator | 2025-09-20 10:43:56.944456 | orchestrator | 2025-09-20 10:43:56.944467 | orchestrator | 2025-09-20 10:43:56.944478 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:43:56.944489 | orchestrator | Saturday 20 September 2025 10:43:56 +0000 (0:00:00.139) 0:01:10.274 **** 2025-09-20 10:43:56.944500 | orchestrator | =============================================================================== 2025-09-20 10:43:56.944510 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.62s 2025-09-20 10:43:56.944521 | orchestrator | Create block LVs -------------------------------------------------------- 3.98s 2025-09-20 10:43:56.944532 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 2.46s 2025-09-20 10:43:56.944543 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.79s 2025-09-20 10:43:56.944554 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.64s 2025-09-20 10:43:56.944565 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2025-09-20 10:43:56.944575 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s 2025-09-20 10:43:56.944586 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s 2025-09-20 10:43:56.944604 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2025-09-20 10:43:57.216995 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2025-09-20 10:43:57.217084 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2025-09-20 10:43:57.217096 | orchestrator | Print LVM report data --------------------------------------------------- 0.90s 2025-09-20 10:43:57.217105 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-09-20 10:43:57.217114 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s 2025-09-20 10:43:57.217123 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-09-20 10:43:57.217132 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-09-20 10:43:57.217140 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-09-20 10:43:57.217149 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2025-09-20 10:43:57.217157 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-09-20 10:43:57.217166 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.65s 2025-09-20 10:44:09.260465 | orchestrator | 2025-09-20 10:44:09 | INFO  | Task b94a6d02-fcb6-42bf-b7d7-8745ca8ddab9 (facts) was prepared for execution. 2025-09-20 10:44:09.260576 | orchestrator | 2025-09-20 10:44:09 | INFO  | It takes a moment until task b94a6d02-fcb6-42bf-b7d7-8745ca8ddab9 (facts) has been started and output is visible here. 
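The Ceph LVM configuration play above validates the OSD layout before creating anything: it counts the OSDs that lvm_volumes would place on ceph_db_devices, ceph_wal_devices and ceph_db_wal_devices, gathers the VG sizes in bytes, and finally lists the Ceph LVs and PVs with their VGs so it can fail early if a block, DB or WAL LV is missing. Below is a minimal sketch of that last gathering step, assuming the role shells out to LVM's JSON reporting; the task and register names mirror the log output, but the rest is illustrative and not the actual osism role.

  # Sketch: list Ceph LVs/PVs via LVM's JSON report format and combine them
  # into a single fact, mirroring the "Get list of Ceph LVs/PVs with
  # associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output"
  # tasks seen in the log (illustrative only).
  - name: Get list of Ceph LVs with associated VGs
    ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
    register: _lvs_cmd_output
    changed_when: false
    become: true

  - name: Get list of Ceph PVs with associated VGs
    ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
    register: _pvs_cmd_output
    changed_when: false
    become: true

  - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
    ansible.builtin.set_fact:
      lvm_report:
        lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
        pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"

  - name: Print LVM report data
    ansible.builtin.debug:
      var: lvm_report

Using --reportformat json avoids parsing column-aligned text, which is presumably why the recorded lvm_report comes out as clean lv/pv lists of lv_name, pv_name and vg_name pairs.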
2025-09-20 10:44:21.592528 | orchestrator | 2025-09-20 10:44:21.592666 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-20 10:44:21.592685 | orchestrator | 2025-09-20 10:44:21.592698 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 10:44:21.592710 | orchestrator | Saturday 20 September 2025 10:44:13 +0000 (0:00:00.270) 0:00:00.270 **** 2025-09-20 10:44:21.592722 | orchestrator | ok: [testbed-manager] 2025-09-20 10:44:21.592735 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:44:21.592775 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:44:21.592786 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:44:21.592796 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:44:21.592807 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:44:21.592817 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:44:21.592828 | orchestrator | 2025-09-20 10:44:21.592839 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 10:44:21.592850 | orchestrator | Saturday 20 September 2025 10:44:14 +0000 (0:00:00.921) 0:00:01.192 **** 2025-09-20 10:44:21.592861 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:44:21.592873 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:44:21.592884 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:44:21.592895 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:44:21.592906 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:44:21.592916 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:44:21.592927 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:44:21.592938 | orchestrator | 2025-09-20 10:44:21.592949 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 10:44:21.592959 | orchestrator | 2025-09-20 10:44:21.592970 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 10:44:21.592981 | orchestrator | Saturday 20 September 2025 10:44:15 +0000 (0:00:01.099) 0:00:02.291 **** 2025-09-20 10:44:21.592992 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:44:21.593002 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:44:21.593014 | orchestrator | ok: [testbed-manager] 2025-09-20 10:44:21.593025 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:44:21.593037 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:44:21.593049 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:44:21.593061 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:44:21.593073 | orchestrator | 2025-09-20 10:44:21.593085 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 10:44:21.593096 | orchestrator | 2025-09-20 10:44:21.593108 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 10:44:21.593120 | orchestrator | Saturday 20 September 2025 10:44:20 +0000 (0:00:05.483) 0:00:07.775 **** 2025-09-20 10:44:21.593132 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:44:21.593145 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:44:21.593156 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:44:21.593169 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:44:21.593180 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:44:21.593192 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:44:21.593204 | orchestrator | skipping: 
[testbed-node-5] 2025-09-20 10:44:21.593215 | orchestrator | 2025-09-20 10:44:21.593227 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:44:21.593240 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593253 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593266 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593278 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593291 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593303 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593315 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:44:21.593335 | orchestrator | 2025-09-20 10:44:21.593348 | orchestrator | 2025-09-20 10:44:21.593360 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:44:21.593373 | orchestrator | Saturday 20 September 2025 10:44:21 +0000 (0:00:00.485) 0:00:08.260 **** 2025-09-20 10:44:21.593384 | orchestrator | =============================================================================== 2025-09-20 10:44:21.593394 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.48s 2025-09-20 10:44:21.593422 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2025-09-20 10:44:21.593433 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s 2025-09-20 10:44:21.593444 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-09-20 10:44:33.623306 | orchestrator | 2025-09-20 10:44:33 | INFO  | Task cd0bea0a-106b-47bf-b915-a782f6e3a319 (frr) was prepared for execution. 2025-09-20 10:44:33.623479 | orchestrator | 2025-09-20 10:44:33 | INFO  | It takes a moment until task cd0bea0a-106b-47bf-b915-a782f6e3a319 (frr) has been started and output is visible here. 
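The facts play above has three stages: create a custom facts directory on every host, optionally copy fact files from the configuration repository (skipped in this run), and then gather facts for all hosts, with a separate conditional pass that only runs when --limit is in use. A minimal standalone sketch with the same shape follows; it assumes the default /etc/ansible/facts.d location and the plain setup module rather than the actual osism.commons.facts role.

  # Sketch of a facts-style play (assumptions noted inline; not the osism role).
  - name: Apply role facts
    hosts: all
    gather_facts: false
    become: true
    tasks:
      - name: Create custom facts directory
        ansible.builtin.file:
          path: /etc/ansible/facts.d   # assumed default custom-facts location
          state: directory
          mode: "0755"

  - name: Gather facts for all hosts
    hosts: all
    gather_facts: false
    tasks:
      - name: Gather facts about hosts
        ansible.builtin.setup:

Collecting facts once up front lets the later roles (frr, the nutshell collection, and so on) rely on cached host facts instead of re-gathering them in every play.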
2025-09-20 10:44:59.584845 | orchestrator | 2025-09-20 10:44:59.584955 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-20 10:44:59.584970 | orchestrator | 2025-09-20 10:44:59.584982 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-20 10:44:59.584993 | orchestrator | Saturday 20 September 2025 10:44:37 +0000 (0:00:00.234) 0:00:00.234 **** 2025-09-20 10:44:59.585021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 10:44:59.585033 | orchestrator | 2025-09-20 10:44:59.585044 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-20 10:44:59.585054 | orchestrator | Saturday 20 September 2025 10:44:37 +0000 (0:00:00.222) 0:00:00.457 **** 2025-09-20 10:44:59.585064 | orchestrator | changed: [testbed-manager] 2025-09-20 10:44:59.585075 | orchestrator | 2025-09-20 10:44:59.585085 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-20 10:44:59.585095 | orchestrator | Saturday 20 September 2025 10:44:39 +0000 (0:00:01.168) 0:00:01.626 **** 2025-09-20 10:44:59.585105 | orchestrator | changed: [testbed-manager] 2025-09-20 10:44:59.585115 | orchestrator | 2025-09-20 10:44:59.585131 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-20 10:44:59.585141 | orchestrator | Saturday 20 September 2025 10:44:48 +0000 (0:00:09.877) 0:00:11.503 **** 2025-09-20 10:44:59.585151 | orchestrator | ok: [testbed-manager] 2025-09-20 10:44:59.585162 | orchestrator | 2025-09-20 10:44:59.585172 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-20 10:44:59.585182 | orchestrator | Saturday 20 September 2025 10:44:50 +0000 (0:00:01.278) 0:00:12.781 **** 2025-09-20 10:44:59.585192 | orchestrator | changed: [testbed-manager] 2025-09-20 10:44:59.585201 | orchestrator | 2025-09-20 10:44:59.585211 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-20 10:44:59.585221 | orchestrator | Saturday 20 September 2025 10:44:51 +0000 (0:00:00.963) 0:00:13.744 **** 2025-09-20 10:44:59.585231 | orchestrator | ok: [testbed-manager] 2025-09-20 10:44:59.585240 | orchestrator | 2025-09-20 10:44:59.585250 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-20 10:44:59.585260 | orchestrator | Saturday 20 September 2025 10:44:52 +0000 (0:00:01.174) 0:00:14.919 **** 2025-09-20 10:44:59.585270 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:44:59.585280 | orchestrator | 2025-09-20 10:44:59.585290 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-20 10:44:59.585299 | orchestrator | Saturday 20 September 2025 10:44:53 +0000 (0:00:00.804) 0:00:15.723 **** 2025-09-20 10:44:59.585309 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:44:59.585319 | orchestrator | 2025-09-20 10:44:59.585329 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-20 10:44:59.585359 | orchestrator | Saturday 20 September 2025 10:44:53 +0000 (0:00:00.167) 0:00:15.890 **** 2025-09-20 10:44:59.585369 | orchestrator | changed: [testbed-manager] 2025-09-20 10:44:59.585379 | orchestrator 
| 2025-09-20 10:44:59.585389 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-20 10:44:59.585400 | orchestrator | Saturday 20 September 2025 10:44:54 +0000 (0:00:00.974) 0:00:16.865 **** 2025-09-20 10:44:59.585410 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-20 10:44:59.585454 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-20 10:44:59.585466 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-20 10:44:59.585476 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-20 10:44:59.585487 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-20 10:44:59.585498 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-20 10:44:59.585508 | orchestrator | 2025-09-20 10:44:59.585519 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-20 10:44:59.585530 | orchestrator | Saturday 20 September 2025 10:44:56 +0000 (0:00:02.169) 0:00:19.034 **** 2025-09-20 10:44:59.585540 | orchestrator | ok: [testbed-manager] 2025-09-20 10:44:59.585551 | orchestrator | 2025-09-20 10:44:59.585562 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-20 10:44:59.585573 | orchestrator | Saturday 20 September 2025 10:44:57 +0000 (0:00:01.419) 0:00:20.454 **** 2025-09-20 10:44:59.585584 | orchestrator | changed: [testbed-manager] 2025-09-20 10:44:59.585594 | orchestrator | 2025-09-20 10:44:59.585606 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:44:59.585616 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 10:44:59.585627 | orchestrator | 2025-09-20 10:44:59.585638 | orchestrator | 2025-09-20 10:44:59.585649 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:44:59.585660 | orchestrator | Saturday 20 September 2025 10:44:59 +0000 (0:00:01.380) 0:00:21.834 **** 2025-09-20 10:44:59.585670 | orchestrator | =============================================================================== 2025-09-20 10:44:59.585681 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.88s 2025-09-20 10:44:59.585691 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.17s 2025-09-20 10:44:59.585702 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.42s 2025-09-20 10:44:59.585713 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s 2025-09-20 10:44:59.585739 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.28s 2025-09-20 10:44:59.585750 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s 2025-09-20 10:44:59.585760 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.17s 2025-09-20 10:44:59.585770 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.97s 2025-09-20 
10:44:59.585779 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2025-09-20 10:44:59.585789 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s 2025-09-20 10:44:59.585799 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-09-20 10:44:59.585809 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s 2025-09-20 10:44:59.863479 | orchestrator | 2025-09-20 10:44:59.864879 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Sep 20 10:44:59 UTC 2025 2025-09-20 10:44:59.864934 | orchestrator | 2025-09-20 10:45:01.746467 | orchestrator | 2025-09-20 10:45:01 | INFO  | Collection nutshell is prepared for execution 2025-09-20 10:45:01.746572 | orchestrator | 2025-09-20 10:45:01 | INFO  | D [0] - dotfiles 2025-09-20 10:45:11.914546 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [0] - homer 2025-09-20 10:45:11.914639 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [0] - netdata 2025-09-20 10:45:11.914656 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [0] - openstackclient 2025-09-20 10:45:11.914668 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [0] - phpmyadmin 2025-09-20 10:45:11.914679 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [0] - common 2025-09-20 10:45:11.918931 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [1] -- loadbalancer 2025-09-20 10:45:11.918954 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [2] --- opensearch 2025-09-20 10:45:11.919416 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [2] --- mariadb-ng 2025-09-20 10:45:11.919655 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [3] ---- horizon 2025-09-20 10:45:11.920668 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [3] ---- keystone 2025-09-20 10:45:11.920686 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [4] ----- neutron 2025-09-20 10:45:11.920698 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [5] ------ wait-for-nova 2025-09-20 10:45:11.921113 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [5] ------ octavia 2025-09-20 10:45:11.922085 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [4] ----- barbican 2025-09-20 10:45:11.922307 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [4] ----- designate 2025-09-20 10:45:11.922386 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [4] ----- ironic 2025-09-20 10:45:11.922758 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [4] ----- placement 2025-09-20 10:45:11.922782 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [4] ----- magnum 2025-09-20 10:45:11.923273 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [1] -- openvswitch 2025-09-20 10:45:11.923574 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [2] --- ovn 2025-09-20 10:45:11.923711 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [1] -- memcached 2025-09-20 10:45:11.923805 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [1] -- redis 2025-09-20 10:45:11.924096 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [1] -- rabbitmq-ng 2025-09-20 10:45:11.924578 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [0] - kubernetes 2025-09-20 10:45:11.926917 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [1] -- kubeconfig 2025-09-20 10:45:11.927096 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [1] -- copy-kubeconfig 2025-09-20 10:45:11.927441 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [0] - ceph 2025-09-20 10:45:11.929586 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [1] -- ceph-pools 2025-09-20 
10:45:11.929607 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [2] --- copy-ceph-keys 2025-09-20 10:45:11.929889 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [3] ---- cephclient 2025-09-20 10:45:11.930224 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-20 10:45:11.930248 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [4] ----- wait-for-keystone 2025-09-20 10:45:11.930689 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-20 10:45:11.932078 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [5] ------ glance 2025-09-20 10:45:11.932100 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [5] ------ cinder 2025-09-20 10:45:11.932112 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [5] ------ nova 2025-09-20 10:45:11.932146 | orchestrator | 2025-09-20 10:45:11 | INFO  | A [4] ----- prometheus 2025-09-20 10:45:11.932158 | orchestrator | 2025-09-20 10:45:11 | INFO  | D [5] ------ grafana 2025-09-20 10:45:12.092513 | orchestrator | 2025-09-20 10:45:12 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-20 10:45:12.092597 | orchestrator | 2025-09-20 10:45:12 | INFO  | Tasks are running in the background 2025-09-20 10:45:14.513938 | orchestrator | 2025-09-20 10:45:14 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-20 10:45:16.622402 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:16.622531 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:16.622937 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:16.625165 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:16.625695 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:16.626166 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:16.626760 | orchestrator | 2025-09-20 10:45:16 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:16.626784 | orchestrator | 2025-09-20 10:45:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:19.711655 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:19.711755 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:19.711768 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:19.711779 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:19.711789 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:19.711799 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:19.711809 | orchestrator | 2025-09-20 10:45:19 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:19.711819 | orchestrator | 2025-09-20 10:45:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:22.720418 | orchestrator | 2025-09-20 10:45:22 | INFO  
| Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:22.720582 | orchestrator | 2025-09-20 10:45:22 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:22.720599 | orchestrator | 2025-09-20 10:45:22 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:22.720912 | orchestrator | 2025-09-20 10:45:22 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:22.725005 | orchestrator | 2025-09-20 10:45:22 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:22.726769 | orchestrator | 2025-09-20 10:45:22 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:22.726793 | orchestrator | 2025-09-20 10:45:22 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:22.726806 | orchestrator | 2025-09-20 10:45:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:25.793816 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:25.793917 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:25.793932 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:25.793944 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:25.793955 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:25.793966 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:25.793977 | orchestrator | 2025-09-20 10:45:25 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:25.793989 | orchestrator | 2025-09-20 10:45:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:28.815978 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:28.827006 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:28.827096 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:28.829035 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:28.829412 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:28.830658 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:28.831036 | orchestrator | 2025-09-20 10:45:28 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:28.831058 | orchestrator | 2025-09-20 10:45:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:31.872348 | orchestrator | 2025-09-20 10:45:31 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:31.872541 | orchestrator | 2025-09-20 10:45:31 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:31.872561 | orchestrator | 2025-09-20 10:45:31 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:31.872573 
| orchestrator | 2025-09-20 10:45:31 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:31.872584 | orchestrator | 2025-09-20 10:45:31 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:31.872595 | orchestrator | 2025-09-20 10:45:31 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:31.872606 | orchestrator | 2025-09-20 10:45:31 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:31.872617 | orchestrator | 2025-09-20 10:45:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:34.922283 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:34.922375 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:34.922963 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:34.923611 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:34.924323 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:34.924910 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:34.925453 | orchestrator | 2025-09-20 10:45:34 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state STARTED 2025-09-20 10:45:34.925575 | orchestrator | 2025-09-20 10:45:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:38.006613 | orchestrator | 2025-09-20 10:45:38.006701 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-20 10:45:38.006714 | orchestrator | 2025-09-20 10:45:38.006724 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-20 10:45:38.006732 | orchestrator | Saturday 20 September 2025 10:45:23 +0000 (0:00:00.414) 0:00:00.414 **** 2025-09-20 10:45:38.006741 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:45:38.006750 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:45:38.006759 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:45:38.006767 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:45:38.006775 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:45:38.006783 | orchestrator | changed: [testbed-manager] 2025-09-20 10:45:38.006790 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:45:38.006798 | orchestrator | 2025-09-20 10:45:38.006806 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-20 10:45:38.006814 | orchestrator | Saturday 20 September 2025 10:45:28 +0000 (0:00:04.412) 0:00:04.827 **** 2025-09-20 10:45:38.006823 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-20 10:45:38.006831 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-20 10:45:38.006839 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-20 10:45:38.006847 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-20 10:45:38.006855 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-20 10:45:38.006863 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-20 10:45:38.006871 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-20 10:45:38.006879 | orchestrator | 2025-09-20 10:45:38.006887 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-20 10:45:38.006895 | orchestrator | Saturday 20 September 2025 10:45:29 +0000 (0:00:01.636) 0:00:06.464 **** 2025-09-20 10:45:38.006913 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:29.096023', 'end': '2025-09-20 10:45:29.100117', 'delta': '0:00:00.004094', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.006926 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:28.979466', 'end': '2025-09-20 10:45:28.985684', 'delta': '0:00:00.006218', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.006951 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:28.953135', 'end': '2025-09-20 10:45:28.961698', 'delta': '0:00:00.008563', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.006984 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:28.950453', 'end': '2025-09-20 10:45:28.958783', 'delta': '0:00:00.008330', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.006994 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:29.062823', 'end': '2025-09-20 10:45:29.070665', 'delta': '0:00:00.007842', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.007334 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:29.419264', 'end': '2025-09-20 10:45:29.427591', 'delta': '0:00:00.008327', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.007352 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 10:45:29.440754', 'end': '2025-09-20 10:45:29.449393', 'delta': '0:00:00.008639', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-20 10:45:38.007397 | orchestrator | 2025-09-20 10:45:38.007408 
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-20 10:45:38.007418 | orchestrator | Saturday 20 September 2025 10:45:31 +0000 (0:00:01.485) 0:00:07.950 **** 2025-09-20 10:45:38.007446 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-20 10:45:38.007456 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-20 10:45:38.007465 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-20 10:45:38.007474 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-20 10:45:38.007483 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-20 10:45:38.007492 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-20 10:45:38.007501 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-20 10:45:38.007510 | orchestrator | 2025-09-20 10:45:38.007520 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-20 10:45:38.007529 | orchestrator | Saturday 20 September 2025 10:45:32 +0000 (0:00:01.243) 0:00:09.193 **** 2025-09-20 10:45:38.007542 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-20 10:45:38.007551 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-20 10:45:38.007559 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-20 10:45:38.007567 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-20 10:45:38.007575 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-20 10:45:38.007583 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-20 10:45:38.007591 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-20 10:45:38.007598 | orchestrator | 2025-09-20 10:45:38.007606 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:45:38.007624 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007635 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007643 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007651 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007659 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007666 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007674 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:45:38.007682 | orchestrator | 2025-09-20 10:45:38.007690 | orchestrator | 2025-09-20 10:45:38.007698 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:45:38.007706 | orchestrator | Saturday 20 September 2025 10:45:36 +0000 (0:00:03.517) 0:00:12.710 **** 2025-09-20 10:45:38.007714 | orchestrator | =============================================================================== 2025-09-20 10:45:38.007722 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.41s 2025-09-20 10:45:38.007730 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 3.52s 2025-09-20 10:45:38.007745 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.64s 2025-09-20 10:45:38.007753 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.49s 2025-09-20 10:45:38.007761 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.24s 2025-09-20 10:45:38.007769 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:38.007777 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:38.007785 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:38.007793 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:38.007801 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:38.007808 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:38.007816 | orchestrator | 2025-09-20 10:45:37 | INFO  | Task 162dc9e6-4415-4339-b555-b511c59d19e5 is in state SUCCESS 2025-09-20 10:45:38.007824 | orchestrator | 2025-09-20 10:45:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:41.122388 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:41.122538 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:41.122561 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:41.122579 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:41.122595 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:41.122611 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:41.122627 | orchestrator | 2025-09-20 10:45:41 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:41.122644 | orchestrator | 2025-09-20 10:45:41 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:44.198783 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:44.200100 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:44.202084 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:44.204221 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:44.205139 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:44.206603 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:44.208828 | orchestrator | 2025-09-20 10:45:44 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:44.208928 | orchestrator | 2025-09-20 
10:45:44 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:47.249317 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:47.251279 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:47.252577 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:47.253057 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:47.253749 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:47.256578 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:47.259010 | orchestrator | 2025-09-20 10:45:47 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:47.259026 | orchestrator | 2025-09-20 10:45:47 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:50.321413 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:50.321599 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:50.321625 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:50.321818 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:50.322329 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:50.323753 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:50.323894 | orchestrator | 2025-09-20 10:45:50 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:50.325860 | orchestrator | 2025-09-20 10:45:50 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:53.368530 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:53.370673 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:53.378385 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:53.379022 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:53.379512 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:53.380556 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:53.390697 | orchestrator | 2025-09-20 10:45:53 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:53.391579 | orchestrator | 2025-09-20 10:45:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:56.436343 | orchestrator | 2025-09-20 10:45:56 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:56.437571 | orchestrator | 2025-09-20 10:45:56 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:56.439428 | 
orchestrator | 2025-09-20 10:45:56 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:56.442589 | orchestrator | 2025-09-20 10:45:56 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:56.444714 | orchestrator | 2025-09-20 10:45:56 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:56.449222 | orchestrator | 2025-09-20 10:45:56 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:56.449750 | orchestrator | 2025-09-20 10:45:56 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:56.449786 | orchestrator | 2025-09-20 10:45:56 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:45:59.513874 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:45:59.514131 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:45:59.514959 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:45:59.515711 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:45:59.516365 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:45:59.517371 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:45:59.518798 | orchestrator | 2025-09-20 10:45:59 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:45:59.518872 | orchestrator | 2025-09-20 10:45:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:02.654264 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:02.654359 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:46:02.654368 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:46:02.654375 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:02.654382 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:02.654388 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:02.654395 | orchestrator | 2025-09-20 10:46:02 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:02.654402 | orchestrator | 2025-09-20 10:46:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:05.795430 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:05.795586 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state STARTED 2025-09-20 10:46:05.795601 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:46:05.795613 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:05.795624 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is 
in state STARTED 2025-09-20 10:46:05.795635 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:05.795646 | orchestrator | 2025-09-20 10:46:05 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:05.795658 | orchestrator | 2025-09-20 10:46:05 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:08.734897 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:08.735011 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task e93d9543-ebab-4651-b13b-3e7372b24ae0 is in state SUCCESS 2025-09-20 10:46:08.735897 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:46:08.735924 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:08.737275 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:08.737352 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:08.737925 | orchestrator | 2025-09-20 10:46:08 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:08.737957 | orchestrator | 2025-09-20 10:46:08 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:11.952508 | orchestrator | 2025-09-20 10:46:11 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:11.952623 | orchestrator | 2025-09-20 10:46:11 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state STARTED 2025-09-20 10:46:11.952638 | orchestrator | 2025-09-20 10:46:11 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:11.952650 | orchestrator | 2025-09-20 10:46:11 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:11.952661 | orchestrator | 2025-09-20 10:46:11 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:11.952672 | orchestrator | 2025-09-20 10:46:11 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:11.952684 | orchestrator | 2025-09-20 10:46:11 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:14.886535 | orchestrator | 2025-09-20 10:46:14 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:14.887299 | orchestrator | 2025-09-20 10:46:14 | INFO  | Task e0244ca3-2fc3-457b-af05-702121f8be60 is in state SUCCESS 2025-09-20 10:46:14.889505 | orchestrator | 2025-09-20 10:46:14 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:14.890625 | orchestrator | 2025-09-20 10:46:14 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:14.894106 | orchestrator | 2025-09-20 10:46:14 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:14.894139 | orchestrator | 2025-09-20 10:46:14 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:14.894152 | orchestrator | 2025-09-20 10:46:14 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:17.998197 | orchestrator | 2025-09-20 10:46:17 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:18.004594 | orchestrator | 2025-09-20 10:46:17 | INFO  | Task 
cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:18.005823 | orchestrator | 2025-09-20 10:46:18 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:18.008474 | orchestrator | 2025-09-20 10:46:18 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:18.013052 | orchestrator | 2025-09-20 10:46:18 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:18.014860 | orchestrator | 2025-09-20 10:46:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:21.074697 | orchestrator | 2025-09-20 10:46:21 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:21.076545 | orchestrator | 2025-09-20 10:46:21 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:21.077489 | orchestrator | 2025-09-20 10:46:21 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:21.078178 | orchestrator | 2025-09-20 10:46:21 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:21.084764 | orchestrator | 2025-09-20 10:46:21 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:21.086231 | orchestrator | 2025-09-20 10:46:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:24.129784 | orchestrator | 2025-09-20 10:46:24 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:24.132730 | orchestrator | 2025-09-20 10:46:24 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:24.133099 | orchestrator | 2025-09-20 10:46:24 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:24.134663 | orchestrator | 2025-09-20 10:46:24 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:24.137978 | orchestrator | 2025-09-20 10:46:24 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:24.138064 | orchestrator | 2025-09-20 10:46:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:27.180282 | orchestrator | 2025-09-20 10:46:27 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:27.180390 | orchestrator | 2025-09-20 10:46:27 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:27.181878 | orchestrator | 2025-09-20 10:46:27 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:27.181913 | orchestrator | 2025-09-20 10:46:27 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:27.182310 | orchestrator | 2025-09-20 10:46:27 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:27.182326 | orchestrator | 2025-09-20 10:46:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:30.221395 | orchestrator | 2025-09-20 10:46:30 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:30.221592 | orchestrator | 2025-09-20 10:46:30 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:30.221619 | orchestrator | 2025-09-20 10:46:30 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:30.222564 | orchestrator | 2025-09-20 10:46:30 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:30.223054 | orchestrator | 2025-09-20 
10:46:30 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:30.223075 | orchestrator | 2025-09-20 10:46:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:33.254154 | orchestrator | 2025-09-20 10:46:33 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:33.254582 | orchestrator | 2025-09-20 10:46:33 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:33.254773 | orchestrator | 2025-09-20 10:46:33 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:33.255749 | orchestrator | 2025-09-20 10:46:33 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:33.255791 | orchestrator | 2025-09-20 10:46:33 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:33.255806 | orchestrator | 2025-09-20 10:46:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:36.294172 | orchestrator | 2025-09-20 10:46:36 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:36.297316 | orchestrator | 2025-09-20 10:46:36 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:36.299787 | orchestrator | 2025-09-20 10:46:36 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:36.302039 | orchestrator | 2025-09-20 10:46:36 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:36.303564 | orchestrator | 2025-09-20 10:46:36 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:36.303580 | orchestrator | 2025-09-20 10:46:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:39.363626 | orchestrator | 2025-09-20 10:46:39 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:39.364490 | orchestrator | 2025-09-20 10:46:39 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state STARTED 2025-09-20 10:46:39.367944 | orchestrator | 2025-09-20 10:46:39 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:39.370389 | orchestrator | 2025-09-20 10:46:39 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:39.374314 | orchestrator | 2025-09-20 10:46:39 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state STARTED 2025-09-20 10:46:39.374338 | orchestrator | 2025-09-20 10:46:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:42.413605 | orchestrator | 2025-09-20 10:46:42 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:42.414287 | orchestrator | 2025-09-20 10:46:42 | INFO  | Task cd809391-86d4-499a-bedf-f412d0f640d5 is in state SUCCESS 2025-09-20 10:46:42.414995 | orchestrator | 2025-09-20 10:46:42.415033 | orchestrator | 2025-09-20 10:46:42.415045 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-20 10:46:42.415058 | orchestrator | 2025-09-20 10:46:42.415070 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-20 10:46:42.415082 | orchestrator | Saturday 20 September 2025 10:45:25 +0000 (0:00:01.603) 0:00:01.603 **** 2025-09-20 10:46:42.415094 | orchestrator | ok: [testbed-manager] => { 2025-09-20 10:46:42.415107 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. 
Please use the homer_url_opensearch_dashboards parameter." 2025-09-20 10:46:42.415120 | orchestrator | } 2025-09-20 10:46:42.415131 | orchestrator | 2025-09-20 10:46:42.415143 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-20 10:46:42.415154 | orchestrator | Saturday 20 September 2025 10:45:25 +0000 (0:00:00.352) 0:00:01.956 **** 2025-09-20 10:46:42.415165 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.415178 | orchestrator | 2025-09-20 10:46:42.415188 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-20 10:46:42.415199 | orchestrator | Saturday 20 September 2025 10:45:27 +0000 (0:00:01.651) 0:00:03.607 **** 2025-09-20 10:46:42.415210 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-20 10:46:42.415221 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-20 10:46:42.415232 | orchestrator | 2025-09-20 10:46:42.415243 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-20 10:46:42.415254 | orchestrator | Saturday 20 September 2025 10:45:29 +0000 (0:00:02.073) 0:00:05.680 **** 2025-09-20 10:46:42.415265 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.415276 | orchestrator | 2025-09-20 10:46:42.415366 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-20 10:46:42.415381 | orchestrator | Saturday 20 September 2025 10:45:31 +0000 (0:00:02.373) 0:00:08.054 **** 2025-09-20 10:46:42.415393 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.415429 | orchestrator | 2025-09-20 10:46:42.415442 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-20 10:46:42.415496 | orchestrator | Saturday 20 September 2025 10:45:33 +0000 (0:00:01.969) 0:00:10.024 **** 2025-09-20 10:46:42.415507 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
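The homer play above only creates /opt/homer and /opt/homer/configuration, copies a config.yml and a docker-compose.yml, and then lets the "Manage homer service" task converge the container; judging by the 26.97s runtime, the FAILED - RETRYING line most likely just covers the time needed to pull the image and start the service. The compose file itself is not printed in this log, so the following is only a minimal sketch, assuming the common b4bz/homer image and an external Docker network named traefik:

# Hypothetical /opt/homer/docker-compose.yml -- not taken from this build;
# only the directory layout and the "Create traefik external network" task
# above are grounded in the log, everything else is an assumption.
services:
  homer:
    image: b4bz/homer:latest                    # assumed image and tag
    restart: unless-stopped
    volumes:
      - /opt/homer/configuration:/www/assets    # config.yml copied by the role lands here
    networks:
      - traefik
networks:
  traefik:
    external: true                              # created beforehand by the role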
2025-09-20 10:46:42.415516 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.415526 | orchestrator | 2025-09-20 10:46:42.415536 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-20 10:46:42.415545 | orchestrator | Saturday 20 September 2025 10:46:00 +0000 (0:00:26.971) 0:00:36.995 **** 2025-09-20 10:46:42.415555 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.415564 | orchestrator | 2025-09-20 10:46:42.415573 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:46:42.415584 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.415595 | orchestrator | 2025-09-20 10:46:42.415605 | orchestrator | 2025-09-20 10:46:42.415615 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:46:42.415624 | orchestrator | Saturday 20 September 2025 10:46:05 +0000 (0:00:05.408) 0:00:42.404 **** 2025-09-20 10:46:42.415634 | orchestrator | =============================================================================== 2025-09-20 10:46:42.415643 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.97s 2025-09-20 10:46:42.415652 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.41s 2025-09-20 10:46:42.415662 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.37s 2025-09-20 10:46:42.415671 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.07s 2025-09-20 10:46:42.415681 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.97s 2025-09-20 10:46:42.415690 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.65s 2025-09-20 10:46:42.415700 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.35s 2025-09-20 10:46:42.415709 | orchestrator | 2025-09-20 10:46:42.415719 | orchestrator | 2025-09-20 10:46:42.415728 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-20 10:46:42.415738 | orchestrator | 2025-09-20 10:46:42.415748 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-20 10:46:42.415758 | orchestrator | Saturday 20 September 2025 10:45:24 +0000 (0:00:01.080) 0:00:01.080 **** 2025-09-20 10:46:42.415768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-20 10:46:42.415779 | orchestrator | 2025-09-20 10:46:42.415788 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-20 10:46:42.415798 | orchestrator | Saturday 20 September 2025 10:45:25 +0000 (0:00:00.663) 0:00:01.743 **** 2025-09-20 10:46:42.415807 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-20 10:46:42.415817 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-20 10:46:42.415826 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-20 10:46:42.415836 | orchestrator | 2025-09-20 10:46:42.415845 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-20 
10:46:42.415855 | orchestrator | Saturday 20 September 2025 10:45:27 +0000 (0:00:02.378) 0:00:04.121 **** 2025-09-20 10:46:42.415864 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.415874 | orchestrator | 2025-09-20 10:46:42.415884 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-20 10:46:42.415894 | orchestrator | Saturday 20 September 2025 10:45:29 +0000 (0:00:01.796) 0:00:05.917 **** 2025-09-20 10:46:42.415916 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-20 10:46:42.415935 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.415945 | orchestrator | 2025-09-20 10:46:42.415955 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-20 10:46:42.415965 | orchestrator | Saturday 20 September 2025 10:46:04 +0000 (0:00:34.707) 0:00:40.625 **** 2025-09-20 10:46:42.415974 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.415984 | orchestrator | 2025-09-20 10:46:42.416076 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-20 10:46:42.416087 | orchestrator | Saturday 20 September 2025 10:46:06 +0000 (0:00:02.506) 0:00:43.131 **** 2025-09-20 10:46:42.416096 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.416106 | orchestrator | 2025-09-20 10:46:42.416122 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-20 10:46:42.416132 | orchestrator | Saturday 20 September 2025 10:46:07 +0000 (0:00:00.790) 0:00:43.922 **** 2025-09-20 10:46:42.416141 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.416151 | orchestrator | 2025-09-20 10:46:42.416161 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-20 10:46:42.416170 | orchestrator | Saturday 20 September 2025 10:46:09 +0000 (0:00:02.082) 0:00:46.004 **** 2025-09-20 10:46:42.416180 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.416190 | orchestrator | 2025-09-20 10:46:42.416199 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-20 10:46:42.416209 | orchestrator | Saturday 20 September 2025 10:46:10 +0000 (0:00:01.045) 0:00:47.050 **** 2025-09-20 10:46:42.416219 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.416228 | orchestrator | 2025-09-20 10:46:42.416238 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-20 10:46:42.416248 | orchestrator | Saturday 20 September 2025 10:46:11 +0000 (0:00:00.771) 0:00:47.822 **** 2025-09-20 10:46:42.416258 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.416267 | orchestrator | 2025-09-20 10:46:42.416277 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:46:42.416289 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.416306 | orchestrator | 2025-09-20 10:46:42.416326 | orchestrator | 2025-09-20 10:46:42.416349 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:46:42.416365 | orchestrator | Saturday 20 September 2025 10:46:12 +0000 (0:00:01.220) 0:00:49.042 **** 2025-09-20 10:46:42.416381 | orchestrator | 
=============================================================================== 2025-09-20 10:46:42.416396 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.71s 2025-09-20 10:46:42.416410 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.51s 2025-09-20 10:46:42.416426 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.38s 2025-09-20 10:46:42.416443 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.08s 2025-09-20 10:46:42.416487 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.80s 2025-09-20 10:46:42.416503 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.22s 2025-09-20 10:46:42.416520 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.05s 2025-09-20 10:46:42.416536 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.79s 2025-09-20 10:46:42.416552 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.77s 2025-09-20 10:46:42.416562 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.66s 2025-09-20 10:46:42.416572 | orchestrator | 2025-09-20 10:46:42.416581 | orchestrator | 2025-09-20 10:46:42.416591 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-20 10:46:42.416601 | orchestrator | 2025-09-20 10:46:42.416610 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-20 10:46:42.416630 | orchestrator | Saturday 20 September 2025 10:45:43 +0000 (0:00:00.432) 0:00:00.432 **** 2025-09-20 10:46:42.416640 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.416649 | orchestrator | 2025-09-20 10:46:42.416659 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-20 10:46:42.416669 | orchestrator | Saturday 20 September 2025 10:45:44 +0000 (0:00:01.137) 0:00:01.570 **** 2025-09-20 10:46:42.416679 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-20 10:46:42.416688 | orchestrator | 2025-09-20 10:46:42.416698 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-20 10:46:42.416707 | orchestrator | Saturday 20 September 2025 10:45:45 +0000 (0:00:00.830) 0:00:02.400 **** 2025-09-20 10:46:42.416717 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.416726 | orchestrator | 2025-09-20 10:46:42.416736 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-20 10:46:42.416745 | orchestrator | Saturday 20 September 2025 10:45:46 +0000 (0:00:01.532) 0:00:03.933 **** 2025-09-20 10:46:42.416755 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
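The homer, openstackclient and phpmyadmin plays above all follow the same pattern: ensure the shared traefik network and a directory under /opt/<service> exist, template a docker-compose.yml, then let a "Manage <service> service" task converge the container with up to 10 retries. A hedged Ansible sketch of that pattern, with module names and parameters assumed rather than taken from the osism.services collection, could look like this:

# Hypothetical tasks/main.yml for a compose-based service role; the osism
# roles shown in the log may implement these steps differently.
- name: Create traefik external network
  community.docker.docker_network:
    name: traefik
    attachable: true

- name: Create required directories
  ansible.builtin.file:
    path: /opt/phpmyadmin
    state: directory
    mode: "0750"

- name: Copy docker-compose.yml file
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /opt/phpmyadmin/docker-compose.yml
  notify: Restart phpmyadmin service            # handler defined elsewhere in the role

- name: Manage phpmyadmin service
  community.docker.docker_compose_v2:
    project_src: /opt/phpmyadmin
    state: present
  register: result
  retries: 10                                   # matches the "10 retries left" messages above
  delay: 5
  until: result is not failed

Under this reading, each "FAILED - RETRYING" message would simply be an unsuccessful poll while images are still being pulled, which is consistent with the tasks eventually reporting ok rather than failed.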
2025-09-20 10:46:42.416765 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.416774 | orchestrator | 2025-09-20 10:46:42.416784 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-20 10:46:42.416794 | orchestrator | Saturday 20 September 2025 10:46:34 +0000 (0:00:47.969) 0:00:51.902 **** 2025-09-20 10:46:42.416803 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.416813 | orchestrator | 2025-09-20 10:46:42.416822 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:46:42.416832 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.416842 | orchestrator | 2025-09-20 10:46:42.416852 | orchestrator | 2025-09-20 10:46:42.416862 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:46:42.416881 | orchestrator | Saturday 20 September 2025 10:46:39 +0000 (0:00:04.299) 0:00:56.202 **** 2025-09-20 10:46:42.416891 | orchestrator | =============================================================================== 2025-09-20 10:46:42.416901 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 47.97s 2025-09-20 10:46:42.416911 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.30s 2025-09-20 10:46:42.416920 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.53s 2025-09-20 10:46:42.416930 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.14s 2025-09-20 10:46:42.416945 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.83s 2025-09-20 10:46:42.416955 | orchestrator | 2025-09-20 10:46:42 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:42.417729 | orchestrator | 2025-09-20 10:46:42 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:42.418981 | orchestrator | 2025-09-20 10:46:42.419008 | orchestrator | 2025-09-20 10:46:42 | INFO  | Task 78fc3a61-9a37-4b39-90f6-c4fedfcfbad8 is in state SUCCESS 2025-09-20 10:46:42.419390 | orchestrator | 2025-09-20 10:46:42.419418 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:46:42.419430 | orchestrator | 2025-09-20 10:46:42.419441 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:46:42.419481 | orchestrator | Saturday 20 September 2025 10:45:22 +0000 (0:00:00.428) 0:00:00.428 **** 2025-09-20 10:46:42.419493 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-20 10:46:42.419504 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-20 10:46:42.419515 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-20 10:46:42.419526 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-20 10:46:42.419537 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-20 10:46:42.419559 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-20 10:46:42.419570 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-20 10:46:42.419581 | orchestrator | 2025-09-20 10:46:42.419592 | orchestrator | PLAY [Apply role netdata] 
****************************************************** 2025-09-20 10:46:42.419603 | orchestrator | 2025-09-20 10:46:42.419613 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-20 10:46:42.419624 | orchestrator | Saturday 20 September 2025 10:45:25 +0000 (0:00:02.550) 0:00:02.979 **** 2025-09-20 10:46:42.419652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:46:42.419671 | orchestrator | 2025-09-20 10:46:42.419682 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-20 10:46:42.419693 | orchestrator | Saturday 20 September 2025 10:45:27 +0000 (0:00:01.840) 0:00:04.819 **** 2025-09-20 10:46:42.419704 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.419715 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:46:42.419726 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:46:42.419736 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:46:42.419747 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:46:42.419757 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:46:42.419768 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:46:42.419779 | orchestrator | 2025-09-20 10:46:42.419789 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-20 10:46:42.419800 | orchestrator | Saturday 20 September 2025 10:45:30 +0000 (0:00:03.019) 0:00:07.838 **** 2025-09-20 10:46:42.419811 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:46:42.419822 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:46:42.419832 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:46:42.419843 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:46:42.419853 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:46:42.419864 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:46:42.419875 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.419886 | orchestrator | 2025-09-20 10:46:42.419897 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-20 10:46:42.419908 | orchestrator | Saturday 20 September 2025 10:45:33 +0000 (0:00:03.314) 0:00:11.153 **** 2025-09-20 10:46:42.419918 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.419929 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:46:42.419940 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:46:42.419950 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:46:42.419961 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:46:42.419971 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:46:42.419982 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:46:42.419993 | orchestrator | 2025-09-20 10:46:42.420004 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-20 10:46:42.420015 | orchestrator | Saturday 20 September 2025 10:45:36 +0000 (0:00:02.635) 0:00:13.789 **** 2025-09-20 10:46:42.420025 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:46:42.420042 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:46:42.420069 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:46:42.420094 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:46:42.420112 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:46:42.420131 | 
orchestrator | changed: [testbed-node-2] 2025-09-20 10:46:42.420148 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.420167 | orchestrator | 2025-09-20 10:46:42.420186 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-20 10:46:42.420207 | orchestrator | Saturday 20 September 2025 10:45:49 +0000 (0:00:13.920) 0:00:27.710 **** 2025-09-20 10:46:42.420226 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:46:42.420245 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:46:42.420278 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:46:42.420296 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:46:42.420314 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:46:42.420331 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:46:42.420349 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.420366 | orchestrator | 2025-09-20 10:46:42.420383 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-20 10:46:42.420402 | orchestrator | Saturday 20 September 2025 10:46:19 +0000 (0:00:29.750) 0:00:57.460 **** 2025-09-20 10:46:42.420421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:46:42.420441 | orchestrator | 2025-09-20 10:46:42.420484 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-20 10:46:42.420513 | orchestrator | Saturday 20 September 2025 10:46:22 +0000 (0:00:02.469) 0:00:59.929 **** 2025-09-20 10:46:42.420530 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-20 10:46:42.420548 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-20 10:46:42.420565 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-20 10:46:42.420583 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-20 10:46:42.420618 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-20 10:46:42.420638 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-20 10:46:42.420658 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-20 10:46:42.420675 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-20 10:46:42.420694 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-20 10:46:42.420713 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-20 10:46:42.420729 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-20 10:46:42.420740 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-20 10:46:42.420751 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-20 10:46:42.420762 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-20 10:46:42.420773 | orchestrator | 2025-09-20 10:46:42.420784 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-20 10:46:42.420796 | orchestrator | Saturday 20 September 2025 10:46:27 +0000 (0:00:05.150) 0:01:05.080 **** 2025-09-20 10:46:42.420815 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.420833 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:46:42.420852 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:46:42.420871 | 
orchestrator | ok: [testbed-node-2] 2025-09-20 10:46:42.420888 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:46:42.420907 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:46:42.420926 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:46:42.420945 | orchestrator | 2025-09-20 10:46:42.420958 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-20 10:46:42.420969 | orchestrator | Saturday 20 September 2025 10:46:28 +0000 (0:00:01.046) 0:01:06.127 **** 2025-09-20 10:46:42.420986 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.421004 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:46:42.421023 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:46:42.421041 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:46:42.421060 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:46:42.421078 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:46:42.421096 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:46:42.421114 | orchestrator | 2025-09-20 10:46:42.421133 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-20 10:46:42.421151 | orchestrator | Saturday 20 September 2025 10:46:29 +0000 (0:00:01.450) 0:01:07.577 **** 2025-09-20 10:46:42.421170 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.421188 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:46:42.421219 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:46:42.421239 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:46:42.421257 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:46:42.421275 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:46:42.421293 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:46:42.421310 | orchestrator | 2025-09-20 10:46:42.421329 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-20 10:46:42.421348 | orchestrator | Saturday 20 September 2025 10:46:31 +0000 (0:00:01.516) 0:01:09.094 **** 2025-09-20 10:46:42.421366 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:46:42.421385 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:46:42.421404 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:46:42.421421 | orchestrator | ok: [testbed-manager] 2025-09-20 10:46:42.421440 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:46:42.421498 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:46:42.421517 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:46:42.421536 | orchestrator | 2025-09-20 10:46:42.421556 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-20 10:46:42.421575 | orchestrator | Saturday 20 September 2025 10:46:34 +0000 (0:00:02.718) 0:01:11.812 **** 2025-09-20 10:46:42.421593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-20 10:46:42.421614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:46:42.421634 | orchestrator | 2025-09-20 10:46:42.421653 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-20 10:46:42.421672 | orchestrator | Saturday 20 September 2025 10:46:35 +0000 (0:00:01.488) 0:01:13.301 **** 2025-09-20 10:46:42.421689 | orchestrator | changed: 
[testbed-manager] 2025-09-20 10:46:42.421707 | orchestrator | 2025-09-20 10:46:42.421725 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-20 10:46:42.421743 | orchestrator | Saturday 20 September 2025 10:46:37 +0000 (0:00:02.055) 0:01:15.357 **** 2025-09-20 10:46:42.421761 | orchestrator | changed: [testbed-manager] 2025-09-20 10:46:42.421779 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:46:42.421797 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:46:42.421814 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:46:42.421832 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:46:42.421850 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:46:42.421867 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:46:42.421885 | orchestrator | 2025-09-20 10:46:42.421902 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:46:42.421921 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.421940 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.421965 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.421978 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.421997 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.422009 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.422075 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:46:42.422097 | orchestrator | 2025-09-20 10:46:42.422108 | orchestrator | 2025-09-20 10:46:42.422120 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:46:42.422131 | orchestrator | Saturday 20 September 2025 10:46:41 +0000 (0:00:04.157) 0:01:19.515 **** 2025-09-20 10:46:42.422146 | orchestrator | =============================================================================== 2025-09-20 10:46:42.422165 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 29.75s 2025-09-20 10:46:42.422183 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.92s 2025-09-20 10:46:42.422201 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.15s 2025-09-20 10:46:42.422219 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.16s 2025-09-20 10:46:42.422238 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.31s 2025-09-20 10:46:42.422257 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.02s 2025-09-20 10:46:42.422276 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.72s 2025-09-20 10:46:42.422292 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.64s 2025-09-20 10:46:42.422303 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.55s 2025-09-20 10:46:42.422314 | orchestrator | osism.services.netdata : Include config tasks 
--------------------------- 2.47s 2025-09-20 10:46:42.422325 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.06s 2025-09-20 10:46:42.422336 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.84s 2025-09-20 10:46:42.422347 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.52s 2025-09-20 10:46:42.422357 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.49s 2025-09-20 10:46:42.422368 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.45s 2025-09-20 10:46:42.422379 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.05s 2025-09-20 10:46:42.422390 | orchestrator | 2025-09-20 10:46:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:45.471291 | orchestrator | 2025-09-20 10:46:45 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:45.472405 | orchestrator | 2025-09-20 10:46:45 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:45.474012 | orchestrator | 2025-09-20 10:46:45 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:45.474099 | orchestrator | 2025-09-20 10:46:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:48.525536 | orchestrator | 2025-09-20 10:46:48 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:48.527542 | orchestrator | 2025-09-20 10:46:48 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:48.530202 | orchestrator | 2025-09-20 10:46:48 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:48.530323 | orchestrator | 2025-09-20 10:46:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:51.571042 | orchestrator | 2025-09-20 10:46:51 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:51.571955 | orchestrator | 2025-09-20 10:46:51 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:51.573206 | orchestrator | 2025-09-20 10:46:51 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:51.575787 | orchestrator | 2025-09-20 10:46:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:54.613752 | orchestrator | 2025-09-20 10:46:54 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:54.613913 | orchestrator | 2025-09-20 10:46:54 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:54.614679 | orchestrator | 2025-09-20 10:46:54 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:54.614788 | orchestrator | 2025-09-20 10:46:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:46:57.661236 | orchestrator | 2025-09-20 10:46:57 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:46:57.663160 | orchestrator | 2025-09-20 10:46:57 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:46:57.665575 | orchestrator | 2025-09-20 10:46:57 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:46:57.665645 | orchestrator | 2025-09-20 10:46:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:00.707823 | orchestrator | 
2025-09-20 10:47:00 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:00.710139 | orchestrator | 2025-09-20 10:47:00 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:00.715070 | orchestrator | 2025-09-20 10:47:00 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:00.715155 | orchestrator | 2025-09-20 10:47:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:03.760677 | orchestrator | 2025-09-20 10:47:03 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:03.763425 | orchestrator | 2025-09-20 10:47:03 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:03.765144 | orchestrator | 2025-09-20 10:47:03 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:03.765171 | orchestrator | 2025-09-20 10:47:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:06.801209 | orchestrator | 2025-09-20 10:47:06 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:06.802701 | orchestrator | 2025-09-20 10:47:06 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:06.804628 | orchestrator | 2025-09-20 10:47:06 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:06.805189 | orchestrator | 2025-09-20 10:47:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:09.847937 | orchestrator | 2025-09-20 10:47:09 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:09.848738 | orchestrator | 2025-09-20 10:47:09 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:09.849111 | orchestrator | 2025-09-20 10:47:09 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:09.849543 | orchestrator | 2025-09-20 10:47:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:12.882138 | orchestrator | 2025-09-20 10:47:12 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:12.883615 | orchestrator | 2025-09-20 10:47:12 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:12.883766 | orchestrator | 2025-09-20 10:47:12 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:12.883841 | orchestrator | 2025-09-20 10:47:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:15.924900 | orchestrator | 2025-09-20 10:47:15 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:15.927195 | orchestrator | 2025-09-20 10:47:15 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:15.929227 | orchestrator | 2025-09-20 10:47:15 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:15.929279 | orchestrator | 2025-09-20 10:47:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:18.965387 | orchestrator | 2025-09-20 10:47:18 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:18.966921 | orchestrator | 2025-09-20 10:47:18 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:18.968443 | orchestrator | 2025-09-20 10:47:18 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:18.968600 | orchestrator | 2025-09-20 10:47:18 | INFO 
 | Wait 1 second(s) until the next check 2025-09-20 10:47:22.015899 | orchestrator | 2025-09-20 10:47:22 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:22.018854 | orchestrator | 2025-09-20 10:47:22 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:22.020547 | orchestrator | 2025-09-20 10:47:22 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:22.021127 | orchestrator | 2025-09-20 10:47:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:25.058261 | orchestrator | 2025-09-20 10:47:25 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:25.058596 | orchestrator | 2025-09-20 10:47:25 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:25.061550 | orchestrator | 2025-09-20 10:47:25 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:25.062094 | orchestrator | 2025-09-20 10:47:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:28.098933 | orchestrator | 2025-09-20 10:47:28 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:28.099023 | orchestrator | 2025-09-20 10:47:28 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:28.099035 | orchestrator | 2025-09-20 10:47:28 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:28.099043 | orchestrator | 2025-09-20 10:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:31.135115 | orchestrator | 2025-09-20 10:47:31 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:31.136604 | orchestrator | 2025-09-20 10:47:31 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:31.138125 | orchestrator | 2025-09-20 10:47:31 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:31.138163 | orchestrator | 2025-09-20 10:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:34.178305 | orchestrator | 2025-09-20 10:47:34 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:34.179798 | orchestrator | 2025-09-20 10:47:34 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state STARTED 2025-09-20 10:47:34.182075 | orchestrator | 2025-09-20 10:47:34 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:34.182123 | orchestrator | 2025-09-20 10:47:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:37.222213 | orchestrator | 2025-09-20 10:47:37 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:37.226530 | orchestrator | 2025-09-20 10:47:37 | INFO  | Task c5df58d2-8d46-49a0-b17e-c89c1d88c128 is in state SUCCESS 2025-09-20 10:47:37.226609 | orchestrator | 2025-09-20 10:47:37.229119 | orchestrator | 2025-09-20 10:47:37.229159 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-20 10:47:37.229170 | orchestrator | 2025-09-20 10:47:37.229180 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-20 10:47:37.229191 | orchestrator | Saturday 20 September 2025 10:45:16 +0000 (0:00:00.226) 0:00:00.226 **** 2025-09-20 10:47:37.229496 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:47:37.229522 | orchestrator | 2025-09-20 10:47:37.229533 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-20 10:47:37.229544 | orchestrator | Saturday 20 September 2025 10:45:17 +0000 (0:00:01.029) 0:00:01.256 **** 2025-09-20 10:47:37.229554 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.229564 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.229574 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.229584 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.229594 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.229605 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.229616 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.230265 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.230352 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.230374 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230388 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.230398 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230408 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-20 10:47:37.230418 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.230428 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.230452 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230487 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230497 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-20 10:47:37.230507 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230517 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230526 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-20 10:47:37.230536 | orchestrator | 2025-09-20 10:47:37.230547 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-20 10:47:37.230557 | orchestrator | Saturday 20 September 2025 10:45:21 +0000 (0:00:04.019) 0:00:05.276 **** 2025-09-20 10:47:37.230568 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:47:37.230579 | orchestrator | 2025-09-20 10:47:37.230589 | orchestrator | TASK 
[service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-20 10:47:37.230599 | orchestrator | Saturday 20 September 2025 10:45:22 +0000 (0:00:01.384) 0:00:06.660 **** 2025-09-20 10:47:37.230635 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230807 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.230919 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.230995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.231035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.231047 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.231057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.231067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.231130 | orchestrator | 2025-09-20 10:47:37.231150 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-20 10:47:37.231165 | orchestrator | Saturday 20 September 2025 10:45:27 +0000 (0:00:04.791) 0:00:11.451 **** 2025-09-20 10:47:37.231191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231254 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231357 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231374 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:47:37.231392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231479 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:47:37.231491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
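The "skipping" entries above and below all come from one pattern: the common role iterates the same service map (fluentd, kolla-toolbox, cron) on every host, copies the extra CA bundle for each enabled service (the "changed" items earlier), and only copies backend-internal TLS material when backend TLS is switched on, which it evidently is not in this testbed run. A minimal Python sketch of that loop follows; the names (common_services, backend_tls_enabled) and certificate paths are assumptions for illustration, not the role's actual variables or templates.

# Illustrative sketch only: models the per-service copy/skip logic visible in
# this task output. Variable names and certificate paths are placeholders,
# not the kolla-ansible/OSISM implementation.
import shutil
from pathlib import Path

common_services = {
    "fluentd":       {"container_name": "fluentd",       "enabled": True},
    "kolla-toolbox": {"container_name": "kolla_toolbox", "enabled": True},
    "cron":          {"container_name": "cron",          "enabled": True},
}

backend_tls_enabled = False  # assumption: consistent with the skips in this run


def copy_service_certs(config_root: Path = Path("/etc/kolla")) -> None:
    for name, svc in common_services.items():
        if not svc["enabled"]:
            continue
        dest = config_root / name
        dest.mkdir(parents=True, exist_ok=True)
        # Extra CA certificates are copied for every enabled service
        # (reported as "changed" in the earlier task).
        shutil.copy("/etc/ssl/certs/ca-certificates.crt", dest / "ca-certificates.crt")
        # Backend-internal TLS cert/key are only copied when backend TLS is
        # enabled, which is presumably why every item here is "skipping".
        if backend_tls_enabled:
            # hypothetical source paths, for illustration only
            shutil.copy("/etc/kolla/certificates/backend-cert.pem", dest)
            shutil.copy("/etc/kolla/certificates/backend-key.pem", dest)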
 2025-09-20 10:47:37.231502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231561 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:47:37.231571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231613 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:47:37.231622 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:47:37.231632 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:47:37.231642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231682 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:47:37.231692 | orchestrator | 2025-09-20 10:47:37.231702 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-20 10:47:37.231712 | orchestrator | Saturday 20 September 2025 10:45:29 +0000 (0:00:02.197) 0:00:13.649 **** 2025-09-20 10:47:37.231722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231770 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231803 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:47:37.231813 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:47:37.231836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231874 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:47:37.231888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231919 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:47:37.231929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.231966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.231984 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:47:37.231994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.232009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.232019 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:47:37.232029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 10:47:37.232039 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.232050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.232060 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:47:37.232069 | orchestrator | 2025-09-20 10:47:37.232080 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-20 10:47:37.232090 | orchestrator | Saturday 20 September 2025 10:45:31 +0000 (0:00:01.903) 0:00:15.552 **** 2025-09-20 10:47:37.232100 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:47:37.232110 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:47:37.232120 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:47:37.232134 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:47:37.232151 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:47:37.232176 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:47:37.232193 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:47:37.232210 | orchestrator | 2025-09-20 10:47:37.232277 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-20 10:47:37.232296 | orchestrator | Saturday 20 September 2025 10:45:32 +0000 (0:00:00.610) 0:00:16.162 **** 2025-09-20 10:47:37.232327 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:47:37.232344 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:47:37.232361 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:47:37.232376 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:47:37.232392 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:47:37.232409 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:47:37.232425 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:47:37.232442 | orchestrator | 2025-09-20 10:47:37.232452 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-20 10:47:37.232499 | orchestrator | Saturday 20 September 2025 10:45:33 +0000 (0:00:01.142) 0:00:17.305 **** 2025-09-20 10:47:37.232511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
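The "Copying over config.json files for services" entries around this point are driven by the same service map: for each host and each enabled service, a directory under /etc/kolla/<service>/ receives a config.json that tells the container's kolla_start what to run and which config files to install. A rough Python sketch follows, with a deliberately simplified payload; the real files are rendered from kolla-ansible Jinja2 templates, so treat the structure and command strings here as assumptions.

# Illustrative sketch only: approximates the per-service config.json copy step.
# The payload below is a simplified stand-in for the templated kolla config.
import json
from pathlib import Path

common_services = {
    "fluentd":       {"container_name": "fluentd"},
    "kolla-toolbox": {"container_name": "kolla_toolbox"},
    "cron":          {"container_name": "cron"},
}


def write_config_json(config_root: Path = Path("/etc/kolla")) -> None:
    for name, svc in common_services.items():
        service_dir = config_root / name
        service_dir.mkdir(parents=True, exist_ok=True)
        # Minimal stand-in for the config.json consumed inside the container:
        # a command to run plus lists of config files and permissions.
        payload = {
            "command": f"run {svc['container_name']}",  # placeholder command
            "config_files": [],
            "permissions": [],
        }
        (service_dir / "config.json").write_text(json.dumps(payload, indent=4))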
2025-09-20 10:47:37.232522 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.232538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.232549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.232559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.232598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.232609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.232631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232656 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232746 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.232776 | orchestrator | 2025-09-20 10:47:37.232786 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-20 10:47:37.232802 | orchestrator | Saturday 20 September 2025 10:45:40 +0000 (0:00:06.693) 0:00:23.999 **** 2025-09-20 10:47:37.232812 | orchestrator | [WARNING]: Skipped 2025-09-20 10:47:37.232824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-20 10:47:37.232834 | orchestrator | to this access issue: 2025-09-20 10:47:37.232844 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-20 10:47:37.232853 | orchestrator | directory 2025-09-20 10:47:37.232863 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:47:37.232873 | orchestrator | 2025-09-20 10:47:37.232883 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-20 10:47:37.232892 | orchestrator | Saturday 20 September 2025 10:45:41 +0000 (0:00:01.707) 0:00:25.707 **** 2025-09-20 10:47:37.232902 | orchestrator | [WARNING]: Skipped 2025-09-20 10:47:37.232912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-20 10:47:37.232927 | orchestrator | to this access issue: 2025-09-20 10:47:37.232938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-20 10:47:37.232948 | orchestrator | directory 2025-09-20 10:47:37.232957 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:47:37.232967 | orchestrator | 2025-09-20 10:47:37.232977 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-20 10:47:37.233022 | orchestrator | Saturday 20 September 2025 10:45:43 +0000 (0:00:02.079) 0:00:27.786 **** 2025-09-20 10:47:37.233034 | orchestrator | [WARNING]: Skipped 2025-09-20 10:47:37.233044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-20 10:47:37.233054 | orchestrator | to this access issue: 2025-09-20 10:47:37.233064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-20 10:47:37.233074 | orchestrator | directory 2025-09-20 10:47:37.233083 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:47:37.233093 | orchestrator | 2025-09-20 10:47:37.233103 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-20 10:47:37.233113 | orchestrator | Saturday 20 September 2025 10:45:44 +0000 (0:00:01.082) 0:00:28.869 **** 2025-09-20 10:47:37.233123 | orchestrator | [WARNING]: Skipped 2025-09-20 10:47:37.233132 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-20 10:47:37.233142 | orchestrator | to this access issue: 2025-09-20 10:47:37.233152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-20 10:47:37.233161 | orchestrator | directory 2025-09-20 10:47:37.233171 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 10:47:37.233181 | 
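Note: the per-item output above (and in the ownership/permissions task that follows) is kolla-ansible's common role looping over its service map; the {'key': ..., 'value': ...} shape of each item suggests Ansible's dict2items filter. A minimal Python sketch of that map, reconstructed solely from the items printed in this log — the surrounding role logic is not shown here, and anything beyond these fields would be an assumption:

# Reconstructed from the logged loop items; not the authoritative kolla-ansible definition.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "group": "kolla-toolbox",
        "enabled": True,
        "image": "registry.osism.tech/kolla/kolla-toolbox:2024.2",
        "environment": {
            "ANSIBLE_NOCOLOR": "1",
            "ANSIBLE_LIBRARY": "/usr/share/ansible",
            "REQUESTS_CA_BUNDLE": "/etc/ssl/certs/ca-certificates.crt",
        },
        "privileged": True,
        "volumes": [
            "/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/dev/:/dev/",
            "/run/:/run/:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.2",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

Each of the loop tasks in this play ("Copying over config.json files for services", "Ensuring config directories have correct owner and permission", "Check common containers") iterates over a map of this shape, one item per service and host.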
orchestrator | 2025-09-20 10:47:37.233191 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-20 10:47:37.233200 | orchestrator | Saturday 20 September 2025 10:45:46 +0000 (0:00:01.514) 0:00:30.383 **** 2025-09-20 10:47:37.233210 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.233224 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.233241 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.233257 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.233272 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.233290 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:47:37.233306 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.233322 | orchestrator | 2025-09-20 10:47:37.233339 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-20 10:47:37.233358 | orchestrator | Saturday 20 September 2025 10:45:51 +0000 (0:00:04.944) 0:00:35.327 **** 2025-09-20 10:47:37.233377 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233394 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233428 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233446 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233529 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233548 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233563 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-20 10:47:37.233574 | orchestrator | 2025-09-20 10:47:37.233584 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-20 10:47:37.233593 | orchestrator | Saturday 20 September 2025 10:45:54 +0000 (0:00:02.707) 0:00:38.035 **** 2025-09-20 10:47:37.233603 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:47:37.233613 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.233622 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.233632 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.233641 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.233651 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.233660 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.233669 | orchestrator | 2025-09-20 10:47:37.233679 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-20 10:47:37.233689 | orchestrator | Saturday 20 September 2025 10:45:56 +0000 (0:00:02.485) 0:00:40.521 **** 2025-09-20 10:47:37.233699 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.233730 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.233751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233781 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.233802 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233812 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.233839 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.233866 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233876 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:47:37.233897 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233912 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233928 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.233939 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-20 10:47:37.233966 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.233976 | orchestrator | 2025-09-20 10:47:37.233986 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-20 10:47:37.233996 | orchestrator | Saturday 20 September 2025 10:46:00 +0000 (0:00:03.835) 0:00:44.357 **** 2025-09-20 10:47:37.234005 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234092 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234132 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234142 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234151 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-20 10:47:37.234161 | orchestrator | 2025-09-20 10:47:37.234171 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-20 10:47:37.234180 | orchestrator | Saturday 20 September 2025 10:46:04 +0000 (0:00:04.517) 0:00:48.874 **** 2025-09-20 10:47:37.234190 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234199 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234209 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234251 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234268 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-20 10:47:37.234285 | orchestrator | 2025-09-20 10:47:37.234302 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-20 10:47:37.234320 | orchestrator | Saturday 20 September 2025 10:46:07 +0000 (0:00:02.423) 0:00:51.298 **** 2025-09-20 10:47:37.234336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.234373 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.234407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.234426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.234451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.234519 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 10:47:37.234565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2025-09-20 10:47:37.234632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234642 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-20 10:47:37.234727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:47:37.234744 | orchestrator | 2025-09-20 10:47:37.234760 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-20 10:47:37.234776 | orchestrator | Saturday 20 September 2025 10:46:11 +0000 (0:00:03.913) 0:00:55.212 **** 2025-09-20 10:47:37.234793 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.234809 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.234827 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:47:37.234842 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.234860 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.234870 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.234880 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.234890 | orchestrator | 2025-09-20 10:47:37.234900 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-20 10:47:37.234909 | orchestrator | Saturday 20 September 2025 10:46:13 +0000 (0:00:01.966) 0:00:57.178 **** 2025-09-20 10:47:37.234919 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.234929 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.234938 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.234948 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:47:37.234957 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.234967 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.234976 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.234986 | orchestrator | 2025-09-20 10:47:37.234995 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235005 | orchestrator | Saturday 20 September 2025 10:46:14 +0000 (0:00:01.468) 0:00:58.646 **** 2025-09-20 10:47:37.235023 | orchestrator | 2025-09-20 10:47:37.235033 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235043 | orchestrator | Saturday 20 September 2025 10:46:14 +0000 (0:00:00.073) 0:00:58.720 **** 2025-09-20 10:47:37.235053 | orchestrator | 2025-09-20 10:47:37.235063 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235073 | orchestrator | Saturday 20 September 2025 10:46:14 +0000 (0:00:00.071) 0:00:58.792 **** 2025-09-20 10:47:37.235082 | orchestrator | 2025-09-20 10:47:37.235092 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235101 | orchestrator | Saturday 20 September 2025 10:46:14 +0000 (0:00:00.116) 0:00:58.909 **** 2025-09-20 10:47:37.235111 | orchestrator | 2025-09-20 10:47:37.235121 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235130 | orchestrator | Saturday 20 September 2025 10:46:15 +0000 (0:00:00.333) 0:00:59.243 **** 2025-09-20 10:47:37.235140 | orchestrator | 2025-09-20 10:47:37.235149 | orchestrator | TASK 
[common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235159 | orchestrator | Saturday 20 September 2025 10:46:15 +0000 (0:00:00.064) 0:00:59.307 **** 2025-09-20 10:47:37.235169 | orchestrator | 2025-09-20 10:47:37.235178 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 10:47:37.235188 | orchestrator | Saturday 20 September 2025 10:46:15 +0000 (0:00:00.064) 0:00:59.372 **** 2025-09-20 10:47:37.235197 | orchestrator | 2025-09-20 10:47:37.235207 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-20 10:47:37.235224 | orchestrator | Saturday 20 September 2025 10:46:15 +0000 (0:00:00.086) 0:00:59.458 **** 2025-09-20 10:47:37.235235 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.235252 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.235267 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.235283 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.235299 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.235317 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.235334 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:47:37.235350 | orchestrator | 2025-09-20 10:47:37.235366 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-20 10:47:37.235385 | orchestrator | Saturday 20 September 2025 10:46:53 +0000 (0:00:38.280) 0:01:37.738 **** 2025-09-20 10:47:37.235401 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.235416 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.235432 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.235447 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.235487 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.235504 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:47:37.235521 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.235537 | orchestrator | 2025-09-20 10:47:37.235552 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-20 10:47:37.235568 | orchestrator | Saturday 20 September 2025 10:47:25 +0000 (0:00:31.422) 0:02:09.160 **** 2025-09-20 10:47:37.235585 | orchestrator | ok: [testbed-manager] 2025-09-20 10:47:37.235602 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:47:37.235618 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:47:37.235634 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:47:37.235650 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:47:37.235666 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:47:37.235682 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:47:37.235698 | orchestrator | 2025-09-20 10:47:37.235715 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-20 10:47:37.235731 | orchestrator | Saturday 20 September 2025 10:47:27 +0000 (0:00:01.969) 0:02:11.130 **** 2025-09-20 10:47:37.235747 | orchestrator | changed: [testbed-manager] 2025-09-20 10:47:37.235763 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:47:37.235779 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:47:37.235808 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:47:37.235824 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:47:37.235840 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:47:37.235856 | orchestrator | changed: [testbed-node-2] 
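Note on the "Creating log volume" and "Link kolla_logs volume to /var/log/kolla" tasks above: they create the named kolla_logs volume that every service definition in this play mounts at /var/log/kolla/. The playbook does this through kolla-ansible's own container tooling; purely as an illustrative sketch (not the role's actual code), the equivalent with the Docker SDK for Python would be:

import docker

client = docker.from_env()

# Named volume referenced as "kolla_logs:/var/log/kolla/" in the service definitions above.
volume = client.volumes.create(name="kolla_logs")

# The "Link kolla_logs volume to /var/log/kolla" task then makes the log data reachable at
# /var/log/kolla on the host (presumably by symlinking to the volume's mountpoint):
print(volume.attrs["Mountpoint"])  # typically /var/lib/docker/volumes/kolla_logs/_data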
2025-09-20 10:47:37.235873 | orchestrator | 2025-09-20 10:47:37.235889 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:47:37.235906 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.235923 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.235947 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.235964 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.235980 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.235996 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.236012 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 10:47:37.236029 | orchestrator | 2025-09-20 10:47:37.236045 | orchestrator | 2025-09-20 10:47:37.236062 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:47:37.236078 | orchestrator | Saturday 20 September 2025 10:47:36 +0000 (0:00:09.126) 0:02:20.257 **** 2025-09-20 10:47:37.236094 | orchestrator | =============================================================================== 2025-09-20 10:47:37.236111 | orchestrator | common : Restart fluentd container ------------------------------------- 38.28s 2025-09-20 10:47:37.236127 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.42s 2025-09-20 10:47:37.236143 | orchestrator | common : Restart cron container ----------------------------------------- 9.13s 2025-09-20 10:47:37.236159 | orchestrator | common : Copying over config.json files for services -------------------- 6.69s 2025-09-20 10:47:37.236175 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.94s 2025-09-20 10:47:37.236192 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.79s 2025-09-20 10:47:37.236208 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.52s 2025-09-20 10:47:37.236224 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.02s 2025-09-20 10:47:37.236239 | orchestrator | common : Check common containers ---------------------------------------- 3.91s 2025-09-20 10:47:37.236254 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.84s 2025-09-20 10:47:37.236270 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.71s 2025-09-20 10:47:37.236285 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.49s 2025-09-20 10:47:37.236302 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.42s 2025-09-20 10:47:37.236318 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.20s 2025-09-20 10:47:37.236344 | orchestrator | common : Find custom fluentd filter config files ------------------------ 2.08s 2025-09-20 10:47:37.236360 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s 
2025-09-20 10:47:37.236377 | orchestrator | common : Creating log volume -------------------------------------------- 1.97s 2025-09-20 10:47:37.236392 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.90s 2025-09-20 10:47:37.236419 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.71s 2025-09-20 10:47:37.236434 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.51s 2025-09-20 10:47:37.236451 | orchestrator | 2025-09-20 10:47:37 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:37.236530 | orchestrator | 2025-09-20 10:47:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:40.253257 | orchestrator | 2025-09-20 10:47:40 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:40.253562 | orchestrator | 2025-09-20 10:47:40 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:40.254150 | orchestrator | 2025-09-20 10:47:40 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:40.254958 | orchestrator | 2025-09-20 10:47:40 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:40.255593 | orchestrator | 2025-09-20 10:47:40 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:47:40.256271 | orchestrator | 2025-09-20 10:47:40 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state STARTED 2025-09-20 10:47:40.256453 | orchestrator | 2025-09-20 10:47:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:43.279720 | orchestrator | 2025-09-20 10:47:43 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:43.280099 | orchestrator | 2025-09-20 10:47:43 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:43.280739 | orchestrator | 2025-09-20 10:47:43 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:43.281375 | orchestrator | 2025-09-20 10:47:43 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:43.282108 | orchestrator | 2025-09-20 10:47:43 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:47:43.282732 | orchestrator | 2025-09-20 10:47:43 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state STARTED 2025-09-20 10:47:43.282872 | orchestrator | 2025-09-20 10:47:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:46.328426 | orchestrator | 2025-09-20 10:47:46 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:46.328816 | orchestrator | 2025-09-20 10:47:46 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:46.329186 | orchestrator | 2025-09-20 10:47:46 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:46.330104 | orchestrator | 2025-09-20 10:47:46 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:46.330863 | orchestrator | 2025-09-20 10:47:46 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:47:46.331815 | orchestrator | 2025-09-20 10:47:46 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state STARTED 2025-09-20 10:47:46.331850 | orchestrator | 2025-09-20 10:47:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:49.374368 
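Note on the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines around this point: they come from the OSISM wrapper waiting on background deployment tasks (the STARTED/SUCCESS values look like Celery-style task states, but that is an inference from the log, not something the log states). A minimal, self-contained sketch of such a wait loop — get_task_state is a hypothetical stand-in here, not the actual osism client:

import random
import time

# Hypothetical stand-in for querying the real task backend; this simply simulates states.
def get_task_state(task_id: str) -> str:
    return random.choice(["STARTED", "STARTED", "SUCCESS"])

def wait_for_tasks(task_ids, interval=1):
    """Poll task states until all have succeeded, mirroring the log output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1",
                "c423e7a0-e489-40d9-898e-41ad0626af8b"])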
| orchestrator | 2025-09-20 10:47:49 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:49.374519 | orchestrator | 2025-09-20 10:47:49 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:49.374537 | orchestrator | 2025-09-20 10:47:49 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:49.374549 | orchestrator | 2025-09-20 10:47:49 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:49.374592 | orchestrator | 2025-09-20 10:47:49 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:47:49.374604 | orchestrator | 2025-09-20 10:47:49 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state STARTED 2025-09-20 10:47:49.374615 | orchestrator | 2025-09-20 10:47:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:52.411180 | orchestrator | 2025-09-20 10:47:52 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:52.411282 | orchestrator | 2025-09-20 10:47:52 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:52.411937 | orchestrator | 2025-09-20 10:47:52 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:52.412508 | orchestrator | 2025-09-20 10:47:52 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:52.413127 | orchestrator | 2025-09-20 10:47:52 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:47:52.413930 | orchestrator | 2025-09-20 10:47:52 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state STARTED 2025-09-20 10:47:52.413980 | orchestrator | 2025-09-20 10:47:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:55.440659 | orchestrator | 2025-09-20 10:47:55 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:55.442986 | orchestrator | 2025-09-20 10:47:55 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:55.443022 | orchestrator | 2025-09-20 10:47:55 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:55.443735 | orchestrator | 2025-09-20 10:47:55 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:55.444952 | orchestrator | 2025-09-20 10:47:55 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:47:55.444986 | orchestrator | 2025-09-20 10:47:55 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state STARTED 2025-09-20 10:47:55.445007 | orchestrator | 2025-09-20 10:47:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:47:58.478578 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:47:58.478668 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:47:58.478918 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:47:58.479628 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:47:58.480179 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:47:58.481391 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 
is in state STARTED 2025-09-20 10:47:58.481819 | orchestrator | 2025-09-20 10:47:58 | INFO  | Task 0a09b447-5621-486e-8c1a-256316ec6e35 is in state SUCCESS 2025-09-20 10:47:58.481902 | orchestrator | 2025-09-20 10:47:58 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:01.531140 | orchestrator | 2025-09-20 10:48:01 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:01.534871 | orchestrator | 2025-09-20 10:48:01 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:48:01.614406 | orchestrator | 2025-09-20 10:48:01 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:01.614652 | orchestrator | 2025-09-20 10:48:01 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:01.614686 | orchestrator | 2025-09-20 10:48:01 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:01.614706 | orchestrator | 2025-09-20 10:48:01 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:01.614726 | orchestrator | 2025-09-20 10:48:01 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:04.590820 | orchestrator | 2025-09-20 10:48:04 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:04.590913 | orchestrator | 2025-09-20 10:48:04 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:48:04.591714 | orchestrator | 2025-09-20 10:48:04 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:04.591792 | orchestrator | 2025-09-20 10:48:04 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:04.591808 | orchestrator | 2025-09-20 10:48:04 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:04.591820 | orchestrator | 2025-09-20 10:48:04 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:04.591832 | orchestrator | 2025-09-20 10:48:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:07.616127 | orchestrator | 2025-09-20 10:48:07 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:07.616930 | orchestrator | 2025-09-20 10:48:07 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:48:07.618431 | orchestrator | 2025-09-20 10:48:07 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:07.619312 | orchestrator | 2025-09-20 10:48:07 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:07.620290 | orchestrator | 2025-09-20 10:48:07 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:07.621286 | orchestrator | 2025-09-20 10:48:07 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:07.621310 | orchestrator | 2025-09-20 10:48:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:10.673299 | orchestrator | 2025-09-20 10:48:10 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:10.673406 | orchestrator | 2025-09-20 10:48:10 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state STARTED 2025-09-20 10:48:10.673915 | orchestrator | 2025-09-20 10:48:10 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:10.675820 | orchestrator | 2025-09-20 10:48:10 | INFO  | Task 
8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:10.675844 | orchestrator | 2025-09-20 10:48:10 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:10.676644 | orchestrator | 2025-09-20 10:48:10 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:10.676754 | orchestrator | 2025-09-20 10:48:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:13.712145 | orchestrator | 2025-09-20 10:48:13 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:13.712361 | orchestrator | 2025-09-20 10:48:13 | INFO  | Task c423e7a0-e489-40d9-898e-41ad0626af8b is in state SUCCESS 2025-09-20 10:48:13.713879 | orchestrator | 2025-09-20 10:48:13.713933 | orchestrator | 2025-09-20 10:48:13.713946 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:48:13.713982 | orchestrator | 2025-09-20 10:48:13.714003 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:48:13.714013 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:00.317) 0:00:00.317 **** 2025-09-20 10:48:13.714059 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:13.714071 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:13.714081 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:13.714090 | orchestrator | 2025-09-20 10:48:13.714100 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:48:13.714110 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:00.536) 0:00:00.855 **** 2025-09-20 10:48:13.714120 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-20 10:48:13.714131 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-20 10:48:13.714140 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-20 10:48:13.714150 | orchestrator | 2025-09-20 10:48:13.714160 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-20 10:48:13.714169 | orchestrator | 2025-09-20 10:48:13.714179 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-20 10:48:13.714203 | orchestrator | Saturday 20 September 2025 10:47:43 +0000 (0:00:00.652) 0:00:01.507 **** 2025-09-20 10:48:13.714213 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:48:13.714233 | orchestrator | 2025-09-20 10:48:13.714243 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-20 10:48:13.714253 | orchestrator | Saturday 20 September 2025 10:47:44 +0000 (0:00:01.074) 0:00:02.582 **** 2025-09-20 10:48:13.714262 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-20 10:48:13.714272 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-20 10:48:13.714282 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-20 10:48:13.714292 | orchestrator | 2025-09-20 10:48:13.714302 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-20 10:48:13.714393 | orchestrator | Saturday 20 September 2025 10:47:45 +0000 (0:00:00.969) 0:00:03.551 **** 2025-09-20 10:48:13.714410 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-20 10:48:13.714420 | 
orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-20 10:48:13.714430 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-20 10:48:13.714439 | orchestrator | 2025-09-20 10:48:13.714449 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-20 10:48:13.714459 | orchestrator | Saturday 20 September 2025 10:47:46 +0000 (0:00:01.850) 0:00:05.401 **** 2025-09-20 10:48:13.714489 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:13.714500 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:13.714510 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:13.714519 | orchestrator | 2025-09-20 10:48:13.714529 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-20 10:48:13.714539 | orchestrator | Saturday 20 September 2025 10:47:48 +0000 (0:00:01.814) 0:00:07.216 **** 2025-09-20 10:48:13.714548 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:13.714558 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:13.714568 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:13.714577 | orchestrator | 2025-09-20 10:48:13.714587 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:48:13.714598 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:13.714610 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:13.714620 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:13.714639 | orchestrator | 2025-09-20 10:48:13.714649 | orchestrator | 2025-09-20 10:48:13.714659 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:48:13.714669 | orchestrator | Saturday 20 September 2025 10:47:55 +0000 (0:00:06.578) 0:00:13.795 **** 2025-09-20 10:48:13.714678 | orchestrator | =============================================================================== 2025-09-20 10:48:13.714688 | orchestrator | memcached : Restart memcached container --------------------------------- 6.58s 2025-09-20 10:48:13.714698 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.85s 2025-09-20 10:48:13.714707 | orchestrator | memcached : Check memcached container ----------------------------------- 1.81s 2025-09-20 10:48:13.714717 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.07s 2025-09-20 10:48:13.714726 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.97s 2025-09-20 10:48:13.714736 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2025-09-20 10:48:13.714746 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2025-09-20 10:48:13.714755 | orchestrator | 2025-09-20 10:48:13.714765 | orchestrator | 2025-09-20 10:48:13.714774 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:48:13.714784 | orchestrator | 2025-09-20 10:48:13.714794 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:48:13.714803 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:00.269) 0:00:00.269 **** 2025-09-20 10:48:13.714813 | 
orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:13.714822 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:13.714832 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:13.714842 | orchestrator | 2025-09-20 10:48:13.714852 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:48:13.714876 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:00.528) 0:00:00.797 **** 2025-09-20 10:48:13.714886 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-20 10:48:13.714896 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-20 10:48:13.714906 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-20 10:48:13.714916 | orchestrator | 2025-09-20 10:48:13.714926 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-20 10:48:13.715005 | orchestrator | 2025-09-20 10:48:13.715020 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-20 10:48:13.715031 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:00.628) 0:00:01.425 **** 2025-09-20 10:48:13.715041 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:48:13.715051 | orchestrator | 2025-09-20 10:48:13.715062 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-20 10:48:13.715072 | orchestrator | Saturday 20 September 2025 10:47:43 +0000 (0:00:00.628) 0:00:02.053 **** 2025-09-20 10:48:13.715085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715187 | orchestrator | 2025-09-20 10:48:13.715198 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-20 10:48:13.715218 | orchestrator | Saturday 20 September 2025 10:47:45 +0000 (0:00:01.391) 0:00:03.444 **** 2025-09-20 10:48:13.715229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715309 | orchestrator | 2025-09-20 10:48:13.715319 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-20 10:48:13.715329 | orchestrator | Saturday 20 September 2025 10:47:47 +0000 (0:00:02.350) 0:00:05.795 **** 2025-09-20 10:48:13.715339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715350 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715413 | orchestrator | 2025-09-20 10:48:13.715429 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-20 10:48:13.715440 | orchestrator | Saturday 20 September 2025 
10:47:50 +0000 (0:00:02.831) 0:00:08.626 **** 2025-09-20 10:48:13.715454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-20 10:48:13.715564 | orchestrator | 2025-09-20 10:48:13.715574 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-20 10:48:13.715584 | orchestrator | Saturday 20 September 2025 10:47:51 +0000 (0:00:01.626) 0:00:10.253 **** 2025-09-20 10:48:13.715594 | orchestrator | 2025-09-20 10:48:13.715604 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-20 10:48:13.715619 | orchestrator | Saturday 20 September 2025 10:47:51 +0000 (0:00:00.110) 0:00:10.363 **** 2025-09-20 10:48:13.715629 | orchestrator | 2025-09-20 10:48:13.715644 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-20 10:48:13.715656 | orchestrator | Saturday 20 September 2025 10:47:52 +0000 (0:00:00.124) 0:00:10.488 **** 2025-09-20 10:48:13.715667 | orchestrator | 2025-09-20 10:48:13.715677 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-20 10:48:13.715688 | orchestrator | Saturday 20 September 2025 10:47:52 +0000 (0:00:00.133) 0:00:10.621 **** 2025-09-20 10:48:13.715699 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:13.715716 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:13.715727 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:13.715738 | orchestrator | 2025-09-20 10:48:13.715748 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-20 10:48:13.715758 | orchestrator | Saturday 20 September 2025 10:48:00 +0000 (0:00:08.174) 0:00:18.796 **** 2025-09-20 10:48:13.715768 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:13.715777 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:13.715869 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:13.715883 | orchestrator | 2025-09-20 10:48:13.715959 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:48:13.715970 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:13.715980 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:13.715990 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:13.716000 | orchestrator | 2025-09-20 10:48:13.716010 | orchestrator | 2025-09-20 10:48:13.716020 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:48:13.716030 | orchestrator | Saturday 20 September 2025 10:48:11 +0000 (0:00:10.771) 0:00:29.567 **** 2025-09-20 10:48:13.716039 | orchestrator | =============================================================================== 2025-09-20 10:48:13.716049 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.78s 2025-09-20 10:48:13.716059 | orchestrator | redis : Restart redis container ----------------------------------------- 8.17s 2025-09-20 
10:48:13.716068 | orchestrator | redis : Copying over redis config files --------------------------------- 2.83s 2025-09-20 10:48:13.716078 | orchestrator | redis : Copying over default config.json files -------------------------- 2.35s 2025-09-20 10:48:13.716088 | orchestrator | redis : Check redis containers ------------------------------------------ 1.63s 2025-09-20 10:48:13.716097 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.39s 2025-09-20 10:48:13.716107 | orchestrator | redis : include_tasks --------------------------------------------------- 0.63s 2025-09-20 10:48:13.716117 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-09-20 10:48:13.716126 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2025-09-20 10:48:13.716136 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s 2025-09-20 10:48:13.716146 | orchestrator | 2025-09-20 10:48:13 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:13.716156 | orchestrator | 2025-09-20 10:48:13 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:13.716166 | orchestrator | 2025-09-20 10:48:13 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:13.716181 | orchestrator | 2025-09-20 10:48:13 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:13.716191 | orchestrator | 2025-09-20 10:48:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:16.754269 | orchestrator | 2025-09-20 10:48:16 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:16.756403 | orchestrator | 2025-09-20 10:48:16 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:16.756882 | orchestrator | 2025-09-20 10:48:16 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:16.757449 | orchestrator | 2025-09-20 10:48:16 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:16.758254 | orchestrator | 2025-09-20 10:48:16 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:16.758338 | orchestrator | 2025-09-20 10:48:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:19.789894 | orchestrator | 2025-09-20 10:48:19 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:19.790244 | orchestrator | 2025-09-20 10:48:19 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:19.791120 | orchestrator | 2025-09-20 10:48:19 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:19.792022 | orchestrator | 2025-09-20 10:48:19 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:19.793829 | orchestrator | 2025-09-20 10:48:19 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:19.793869 | orchestrator | 2025-09-20 10:48:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:22.834336 | orchestrator | 2025-09-20 10:48:22 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:22.834443 | orchestrator | 2025-09-20 10:48:22 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:22.836161 | orchestrator | 2025-09-20 
10:48:22 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:22.836189 | orchestrator | 2025-09-20 10:48:22 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:22.836201 | orchestrator | 2025-09-20 10:48:22 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:22.836213 | orchestrator | 2025-09-20 10:48:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:25.936036 | orchestrator | 2025-09-20 10:48:25 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:25.936142 | orchestrator | 2025-09-20 10:48:25 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:25.936800 | orchestrator | 2025-09-20 10:48:25 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:25.937320 | orchestrator | 2025-09-20 10:48:25 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:25.937945 | orchestrator | 2025-09-20 10:48:25 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:25.938462 | orchestrator | 2025-09-20 10:48:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:29.006563 | orchestrator | 2025-09-20 10:48:29 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:29.006666 | orchestrator | 2025-09-20 10:48:29 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:29.007201 | orchestrator | 2025-09-20 10:48:29 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:29.007849 | orchestrator | 2025-09-20 10:48:29 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:29.009370 | orchestrator | 2025-09-20 10:48:29 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:29.009426 | orchestrator | 2025-09-20 10:48:29 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:32.056583 | orchestrator | 2025-09-20 10:48:32 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:32.056711 | orchestrator | 2025-09-20 10:48:32 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:32.056735 | orchestrator | 2025-09-20 10:48:32 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:32.056789 | orchestrator | 2025-09-20 10:48:32 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:32.056807 | orchestrator | 2025-09-20 10:48:32 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:32.056825 | orchestrator | 2025-09-20 10:48:32 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:35.083568 | orchestrator | 2025-09-20 10:48:35 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:35.083661 | orchestrator | 2025-09-20 10:48:35 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:35.084521 | orchestrator | 2025-09-20 10:48:35 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:35.084542 | orchestrator | 2025-09-20 10:48:35 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:35.085207 | orchestrator | 2025-09-20 10:48:35 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:35.085225 | 
orchestrator | 2025-09-20 10:48:35 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:38.120709 | orchestrator | 2025-09-20 10:48:38 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:38.122797 | orchestrator | 2025-09-20 10:48:38 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:38.123250 | orchestrator | 2025-09-20 10:48:38 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:38.125513 | orchestrator | 2025-09-20 10:48:38 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:38.126516 | orchestrator | 2025-09-20 10:48:38 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state STARTED 2025-09-20 10:48:38.126541 | orchestrator | 2025-09-20 10:48:38 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:41.165131 | orchestrator | 2025-09-20 10:48:41 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:41.167181 | orchestrator | 2025-09-20 10:48:41 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:41.167622 | orchestrator | 2025-09-20 10:48:41 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:41.168379 | orchestrator | 2025-09-20 10:48:41 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:41.169729 | orchestrator | 2025-09-20 10:48:41.169760 | orchestrator | 2025-09-20 10:48:41 | INFO  | Task 48b4c712-03e6-426c-9181-27cd9e465635 is in state SUCCESS 2025-09-20 10:48:41.171179 | orchestrator | 2025-09-20 10:48:41.171217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:48:41.171230 | orchestrator | 2025-09-20 10:48:41.171242 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:48:41.171254 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-20 10:48:41.171265 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:41.171277 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:41.171288 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:41.171299 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:41.171309 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:41.171320 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:41.171330 | orchestrator | 2025-09-20 10:48:41.171341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:48:41.171352 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:00.602) 0:00:00.819 **** 2025-09-20 10:48:41.171364 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-20 10:48:41.171402 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-20 10:48:41.171414 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-20 10:48:41.171424 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-20 10:48:41.171435 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-20 10:48:41.171446 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-20 10:48:41.171457 | orchestrator | 2025-09-20 10:48:41.171467 | orchestrator | PLAY 
[Apply role openvswitch] ************************************************** 2025-09-20 10:48:41.171509 | orchestrator | 2025-09-20 10:48:41.171520 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-20 10:48:41.171531 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:00.973) 0:00:01.792 **** 2025-09-20 10:48:41.171543 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:48:41.171555 | orchestrator | 2025-09-20 10:48:41.171566 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-20 10:48:41.171578 | orchestrator | Saturday 20 September 2025 10:47:44 +0000 (0:00:01.522) 0:00:03.315 **** 2025-09-20 10:48:41.171589 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-20 10:48:41.171600 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-20 10:48:41.171611 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-20 10:48:41.171621 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-20 10:48:41.171632 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-20 10:48:41.171643 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-20 10:48:41.171653 | orchestrator | 2025-09-20 10:48:41.171664 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-20 10:48:41.171675 | orchestrator | Saturday 20 September 2025 10:47:45 +0000 (0:00:01.490) 0:00:04.805 **** 2025-09-20 10:48:41.171686 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-20 10:48:41.171697 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-20 10:48:41.171708 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-20 10:48:41.171718 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-20 10:48:41.171729 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-20 10:48:41.171740 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-20 10:48:41.171750 | orchestrator | 2025-09-20 10:48:41.171761 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-20 10:48:41.171773 | orchestrator | Saturday 20 September 2025 10:47:47 +0000 (0:00:01.426) 0:00:06.231 **** 2025-09-20 10:48:41.171785 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-20 10:48:41.171797 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:41.171810 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-20 10:48:41.171821 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:41.171833 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-20 10:48:41.171845 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:41.171856 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-20 10:48:41.171868 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:41.171879 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-20 10:48:41.171891 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:41.171903 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-20 10:48:41.171930 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:41.171942 | orchestrator | 
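Note: the module-load tasks above first load the openvswitch kernel module on every node and then persist it through a drop-in file under /etc/modules-load.d/ so it is loaded again after a reboot. Below is a minimal standalone sketch of that pattern only; it is not the kolla-ansible module-load role itself, and the play name, the kernel_modules variable, and the drop-in path are illustrative assumptions (the community.general collection must be available for the modprobe module).

---
# Illustrative sketch of the "load now, persist via modules-load.d" pattern.
# Names and paths here are assumptions, not taken from the kolla-ansible role.
- name: Load and persist kernel modules
  hosts: all
  become: true
  vars:
    kernel_modules:
      - openvswitch
  tasks:
    - name: Load modules
      community.general.modprobe:
        name: "{{ item }}"
        state: present
      loop: "{{ kernel_modules }}"

    - name: Persist modules via modules-load.d
      ansible.builtin.copy:
        content: "{{ item }}\n"
        dest: "/etc/modules-load.d/{{ item }}.conf"
        mode: "0644"
      loop: "{{ kernel_modules }}"

Run against hosts like testbed-node-0 through testbed-node-5, a play of this shape would be expected to report the same "changed" results for the load and persist steps that appear in the output above.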
2025-09-20 10:48:41.171954 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-20 10:48:41.171974 | orchestrator | Saturday 20 September 2025 10:47:48 +0000 (0:00:01.493) 0:00:07.724 **** 2025-09-20 10:48:41.171986 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:41.171998 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:41.172010 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:41.172021 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:41.172033 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:41.172044 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:41.172056 | orchestrator | 2025-09-20 10:48:41.172068 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-20 10:48:41.172080 | orchestrator | Saturday 20 September 2025 10:47:49 +0000 (0:00:00.981) 0:00:08.705 **** 2025-09-20 10:48:41.172112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172296 | orchestrator | 2025-09-20 10:48:41.172307 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-20 10:48:41.172318 | orchestrator | Saturday 20 September 2025 10:47:51 +0000 (0:00:01.884) 0:00:10.589 **** 2025-09-20 10:48:41.172330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172448 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172546 | orchestrator | 2025-09-20 10:48:41.172558 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-20 10:48:41.172568 | orchestrator | Saturday 20 September 2025 10:47:54 +0000 (0:00:02.699) 0:00:13.289 **** 2025-09-20 10:48:41.172579 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:41.172590 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:41.172601 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:41.172612 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:41.172622 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:41.172633 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:41.172644 | orchestrator | 2025-09-20 10:48:41.172655 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-20 10:48:41.172665 | orchestrator | Saturday 20 September 2025 10:47:55 +0000 (0:00:00.858) 0:00:14.147 **** 2025-09-20 10:48:41.172676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 10:48:41.172854 | orchestrator | 2025-09-20 10:48:41.172866 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 10:48:41.172876 | orchestrator | Saturday 20 September 2025 10:47:57 +0000 (0:00:02.406) 0:00:16.554 **** 2025-09-20 10:48:41.172887 | orchestrator | 2025-09-20 10:48:41.172898 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 10:48:41.172909 | orchestrator | Saturday 20 September 2025 10:47:57 +0000 (0:00:00.266) 0:00:16.820 **** 2025-09-20 10:48:41.172927 | orchestrator | 2025-09-20 10:48:41.172937 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 10:48:41.172948 | orchestrator | Saturday 20 September 2025 10:47:57 +0000 (0:00:00.128) 0:00:16.949 **** 2025-09-20 10:48:41.172959 | orchestrator | 2025-09-20 10:48:41.172970 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 10:48:41.172980 | orchestrator | Saturday 20 September 2025 10:47:58 +0000 (0:00:00.127) 0:00:17.076 **** 2025-09-20 10:48:41.172991 | orchestrator | 2025-09-20 10:48:41.173002 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 10:48:41.173013 | orchestrator | Saturday 20 September 2025 10:47:58 +0000 (0:00:00.133) 0:00:17.210 **** 2025-09-20 10:48:41.173023 | orchestrator | 2025-09-20 10:48:41.173034 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 10:48:41.173045 | orchestrator | Saturday 20 September 2025 10:47:58 +0000 (0:00:00.121) 0:00:17.331 **** 2025-09-20 10:48:41.173056 | orchestrator | 2025-09-20 10:48:41.173066 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-20 10:48:41.173077 | orchestrator | Saturday 20 September 2025 10:47:58 +0000 (0:00:00.121) 0:00:17.453 **** 2025-09-20 10:48:41.173088 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:41.173098 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:41.173109 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:41.173120 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:41.173131 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:41.173141 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:41.173152 | orchestrator | 2025-09-20 10:48:41.173163 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-20 10:48:41.173174 | orchestrator | Saturday 20 September 2025 10:48:10 +0000 (0:00:11.919) 0:00:29.372 **** 2025-09-20 10:48:41.173185 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:41.173196 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:41.173206 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:41.173217 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:41.173228 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:41.173238 | orchestrator | ok: 
[testbed-node-2] 2025-09-20 10:48:41.173249 | orchestrator | 2025-09-20 10:48:41.173260 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-20 10:48:41.173271 | orchestrator | Saturday 20 September 2025 10:48:13 +0000 (0:00:02.966) 0:00:32.339 **** 2025-09-20 10:48:41.173282 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:41.173293 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:41.173303 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:41.173314 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:41.173325 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:41.173336 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:41.173346 | orchestrator | 2025-09-20 10:48:41.173362 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-20 10:48:41.173373 | orchestrator | Saturday 20 September 2025 10:48:16 +0000 (0:00:03.561) 0:00:35.900 **** 2025-09-20 10:48:41.173384 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-20 10:48:41.173396 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-20 10:48:41.173406 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-20 10:48:41.173417 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-20 10:48:41.173428 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-20 10:48:41.173444 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-20 10:48:41.173462 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-20 10:48:41.173502 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-20 10:48:41.173514 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-20 10:48:41.173525 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-20 10:48:41.173536 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-20 10:48:41.173547 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-20 10:48:41.173557 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-20 10:48:41.173568 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-20 10:48:41.173578 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-20 10:48:41.173589 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-20 10:48:41.173600 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 
'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 10:48:41.173610 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 10:48:41.173621 | orchestrator |
2025-09-20 10:48:41.173632 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-20 10:48:41.173643 | orchestrator | Saturday 20 September 2025 10:48:24 +0000 (0:00:07.192) 0:00:43.093 ****
2025-09-20 10:48:41.173654 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-20 10:48:41.173665 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:48:41.173675 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-20 10:48:41.173686 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:48:41.173697 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-20 10:48:41.173707 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:48:41.173718 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-20 10:48:41.173729 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-20 10:48:41.173740 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-20 10:48:41.173750 | orchestrator |
2025-09-20 10:48:41.173761 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-20 10:48:41.173772 | orchestrator | Saturday 20 September 2025 10:48:26 +0000 (0:00:02.869) 0:00:45.962 ****
2025-09-20 10:48:41.173783 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-20 10:48:41.173794 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:48:41.173805 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-20 10:48:41.173815 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:48:41.173826 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-20 10:48:41.173837 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:48:41.173847 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-20 10:48:41.173858 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-20 10:48:41.173869 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-20 10:48:41.173880 | orchestrator |
2025-09-20 10:48:41.173891 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-20 10:48:41.173902 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:04.242) 0:00:50.205 ****
2025-09-20 10:48:41.173912 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:48:41.173930 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:48:41.173941 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:48:41.173952 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:48:41.173963 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:48:41.173974 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:48:41.173984 | orchestrator |
2025-09-20 10:48:41.173995 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:48:41.174011 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 10:48:41.174079 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 10:48:41.174091 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 10:48:41.174103 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 10:48:41.174113 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 10:48:41.174132 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 10:48:41.174143 | orchestrator |
2025-09-20 10:48:41.174154 | orchestrator |
2025-09-20 10:48:41.174165 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:48:41.174176 | orchestrator | Saturday 20 September 2025 10:48:39 +0000 (0:00:08.279) 0:00:58.485 ****
2025-09-20 10:48:41.174200 | orchestrator | ===============================================================================
2025-09-20 10:48:41.174212 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.92s
2025-09-20 10:48:41.174234 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.84s
2025-09-20 10:48:41.174245 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.19s
2025-09-20 10:48:41.174256 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.24s
2025-09-20 10:48:41.174267 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.97s
2025-09-20 10:48:41.174277 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.87s
2025-09-20 10:48:41.174288 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.70s
2025-09-20 10:48:41.174299 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.41s
2025-09-20 10:48:41.174309 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.88s
2025-09-20 10:48:41.174320 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.52s
2025-09-20 10:48:41.174331 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.49s
2025-09-20 10:48:41.174341 | orchestrator | module-load : Load modules ---------------------------------------------- 1.49s
2025-09-20 10:48:41.174352 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.43s
2025-09-20 10:48:41.174363 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.98s
2025-09-20 10:48:41.174374 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2025-09-20 10:48:41.174385 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.90s
2025-09-20 10:48:41.174395 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.86s
2025-09-20 10:48:41.174406 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2025-09-20 10:48:41.174417 | orchestrator | 2025-09-20 10:48:41 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED
2025-09-20 10:48:41.174435 | orchestrator | 2025-09-20 10:48:41 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:48:44.208257 | orchestrator | 2025-09-20 10:48:44 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED
2025-09-20 10:48:44.208951 |
orchestrator | 2025-09-20 10:48:44 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:44.209960 | orchestrator | 2025-09-20 10:48:44 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:44.212192 | orchestrator | 2025-09-20 10:48:44 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:44.212230 | orchestrator | 2025-09-20 10:48:44 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:48:44.212243 | orchestrator | 2025-09-20 10:48:44 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:47.334372 | orchestrator | 2025-09-20 10:48:47 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:47.334541 | orchestrator | 2025-09-20 10:48:47 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:47.334560 | orchestrator | 2025-09-20 10:48:47 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:47.334574 | orchestrator | 2025-09-20 10:48:47 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:47.334605 | orchestrator | 2025-09-20 10:48:47 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:48:47.334618 | orchestrator | 2025-09-20 10:48:47 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:50.354202 | orchestrator | 2025-09-20 10:48:50 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:50.355646 | orchestrator | 2025-09-20 10:48:50 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:50.359469 | orchestrator | 2025-09-20 10:48:50 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:50.360677 | orchestrator | 2025-09-20 10:48:50 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:50.360803 | orchestrator | 2025-09-20 10:48:50 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:48:50.360945 | orchestrator | 2025-09-20 10:48:50 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:53.427144 | orchestrator | 2025-09-20 10:48:53 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:53.427342 | orchestrator | 2025-09-20 10:48:53 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state STARTED 2025-09-20 10:48:53.427800 | orchestrator | 2025-09-20 10:48:53 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:53.428946 | orchestrator | 2025-09-20 10:48:53 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:53.429551 | orchestrator | 2025-09-20 10:48:53 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:48:53.429574 | orchestrator | 2025-09-20 10:48:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:56.517150 | orchestrator | 2025-09-20 10:48:56 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:56.517740 | orchestrator | 2025-09-20 10:48:56 | INFO  | Task aacd9d5b-600a-41c3-815c-d62901f79cd7 is in state SUCCESS 2025-09-20 10:48:56.518806 | orchestrator | 2025-09-20 10:48:56.518834 | orchestrator | 2025-09-20 10:48:56.518843 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-20 10:48:56.518877 | orchestrator | 2025-09-20 
10:48:56.518886 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-20 10:48:56.518895 | orchestrator | Saturday 20 September 2025 10:45:16 +0000 (0:00:00.157) 0:00:00.157 **** 2025-09-20 10:48:56.518903 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.518913 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.518921 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.518929 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.519043 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.519068 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.519077 | orchestrator | 2025-09-20 10:48:56.519085 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-20 10:48:56.519094 | orchestrator | Saturday 20 September 2025 10:45:17 +0000 (0:00:00.809) 0:00:00.967 **** 2025-09-20 10:48:56.519102 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.519111 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.519119 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.519127 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.519135 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.519143 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.519151 | orchestrator | 2025-09-20 10:48:56.519159 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-20 10:48:56.519167 | orchestrator | Saturday 20 September 2025 10:45:18 +0000 (0:00:00.627) 0:00:01.595 **** 2025-09-20 10:48:56.519175 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.519183 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.519191 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.519240 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.519251 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.519259 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.519267 | orchestrator | 2025-09-20 10:48:56.519275 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-20 10:48:56.519283 | orchestrator | Saturday 20 September 2025 10:45:19 +0000 (0:00:00.763) 0:00:02.358 **** 2025-09-20 10:48:56.519292 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.519300 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.519307 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.519315 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.519323 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.519331 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.519339 | orchestrator | 2025-09-20 10:48:56.519347 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-20 10:48:56.519355 | orchestrator | Saturday 20 September 2025 10:45:20 +0000 (0:00:01.953) 0:00:04.311 **** 2025-09-20 10:48:56.519363 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.519371 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.519379 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.519387 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.519395 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.519402 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.519410 | orchestrator | 2025-09-20 10:48:56.519418 | orchestrator | TASK 
[k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-20 10:48:56.519426 | orchestrator | Saturday 20 September 2025 10:45:22 +0000 (0:00:01.215) 0:00:05.527 **** 2025-09-20 10:48:56.519434 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.519442 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.519450 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.519458 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.519512 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.519522 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.519530 | orchestrator | 2025-09-20 10:48:56.519538 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-20 10:48:56.519546 | orchestrator | Saturday 20 September 2025 10:45:24 +0000 (0:00:01.955) 0:00:07.482 **** 2025-09-20 10:48:56.519566 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.519575 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.519583 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.519590 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.519598 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.519606 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.519614 | orchestrator | 2025-09-20 10:48:56.519622 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-20 10:48:56.519630 | orchestrator | Saturday 20 September 2025 10:45:24 +0000 (0:00:00.734) 0:00:08.217 **** 2025-09-20 10:48:56.519638 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.519646 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.519654 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.519661 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.519669 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.519677 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.519684 | orchestrator | 2025-09-20 10:48:56.519692 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-20 10:48:56.519700 | orchestrator | Saturday 20 September 2025 10:45:25 +0000 (0:00:00.822) 0:00:09.039 **** 2025-09-20 10:48:56.519708 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 10:48:56.519716 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 10:48:56.519724 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.519732 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 10:48:56.519740 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 10:48:56.519748 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.519756 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 10:48:56.519764 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 10:48:56.519771 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.519779 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 10:48:56.519797 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 10:48:56.519805 | orchestrator | skipping: [testbed-node-0] 
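The k3s_prereq tasks above (their remaining per-node skip messages continue directly below) boil down to a few kernel networking switches applied before the k3s binaries are installed. A rough manual sketch of the same settings follows; the net.bridge.* keys, the /etc/modules-load.d/ path and br_netfilter are named in the task output, while the other sysctl keys and the file name are assumptions rather than values taken from the role.

```sh
# Illustrative manual equivalent of the k3s_prereq networking tasks (not the role itself).
sysctl -w net.ipv4.ip_forward=1                # "Enable IPv4 forwarding"
sysctl -w net.ipv6.conf.all.forwarding=1       # "Enable IPv6 forwarding"
sysctl -w net.ipv6.conf.all.accept_ra=2        # "Enable IPv6 router advertisements"

# These steps were skipped on the testbed nodes (br_netfilter already available),
# but this is roughly what the tasks would do elsewhere; the file name is hypothetical.
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1   # the "just to be sure" items from the task
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
```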
2025-09-20 10:48:56.519813 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 10:48:56.519821 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 10:48:56.519829 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.519837 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 10:48:56.519845 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 10:48:56.519852 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.519860 | orchestrator | 2025-09-20 10:48:56.519868 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-20 10:48:56.519876 | orchestrator | Saturday 20 September 2025 10:45:26 +0000 (0:00:00.749) 0:00:09.789 **** 2025-09-20 10:48:56.519886 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.519894 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.519903 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.519911 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.519920 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.519929 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.519937 | orchestrator | 2025-09-20 10:48:56.519946 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-20 10:48:56.519956 | orchestrator | Saturday 20 September 2025 10:45:27 +0000 (0:00:01.296) 0:00:11.085 **** 2025-09-20 10:48:56.519965 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.519974 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.519989 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.519997 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520004 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520012 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520020 | orchestrator | 2025-09-20 10:48:56.520028 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-20 10:48:56.520036 | orchestrator | Saturday 20 September 2025 10:45:28 +0000 (0:00:01.207) 0:00:12.292 **** 2025-09-20 10:48:56.520044 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.520052 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.520060 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.520068 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.520075 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.520083 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.520091 | orchestrator | 2025-09-20 10:48:56.520099 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-20 10:48:56.520107 | orchestrator | Saturday 20 September 2025 10:45:34 +0000 (0:00:05.507) 0:00:17.799 **** 2025-09-20 10:48:56.520115 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.520123 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.520130 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.520138 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.520146 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.520154 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.520162 | orchestrator | 2025-09-20 10:48:56.520170 | orchestrator | TASK 
[k3s_download : Download k3s binary armhf] ******************************** 2025-09-20 10:48:56.520178 | orchestrator | Saturday 20 September 2025 10:45:36 +0000 (0:00:01.561) 0:00:19.361 **** 2025-09-20 10:48:56.520186 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.520193 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.520201 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.520209 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.520222 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.520230 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.520238 | orchestrator | 2025-09-20 10:48:56.520246 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-20 10:48:56.520256 | orchestrator | Saturday 20 September 2025 10:45:38 +0000 (0:00:02.788) 0:00:22.150 **** 2025-09-20 10:48:56.520264 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.520272 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.520279 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.520287 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520295 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520303 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520310 | orchestrator | 2025-09-20 10:48:56.520318 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-20 10:48:56.520326 | orchestrator | Saturday 20 September 2025 10:45:40 +0000 (0:00:01.445) 0:00:23.596 **** 2025-09-20 10:48:56.520334 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-20 10:48:56.520343 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-20 10:48:56.520351 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-20 10:48:56.520359 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-20 10:48:56.520367 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-20 10:48:56.520375 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-20 10:48:56.520382 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-20 10:48:56.520390 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-20 10:48:56.520398 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-20 10:48:56.520406 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-20 10:48:56.520414 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-20 10:48:56.520426 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-20 10:48:56.520434 | orchestrator | 2025-09-20 10:48:56.520442 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-20 10:48:56.520450 | orchestrator | Saturday 20 September 2025 10:45:43 +0000 (0:00:02.823) 0:00:26.419 **** 2025-09-20 10:48:56.520458 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.520466 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.520489 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.520497 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.520505 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.520513 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.520521 | orchestrator | 2025-09-20 10:48:56.520536 | orchestrator | PLAY [Deploy k3s master nodes] 
************************************************* 2025-09-20 10:48:56.520544 | orchestrator | 2025-09-20 10:48:56.520552 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-20 10:48:56.520560 | orchestrator | Saturday 20 September 2025 10:45:45 +0000 (0:00:02.177) 0:00:28.596 **** 2025-09-20 10:48:56.520568 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520576 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520584 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520592 | orchestrator | 2025-09-20 10:48:56.520600 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-20 10:48:56.520608 | orchestrator | Saturday 20 September 2025 10:45:46 +0000 (0:00:01.273) 0:00:29.870 **** 2025-09-20 10:48:56.520616 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520624 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520631 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520639 | orchestrator | 2025-09-20 10:48:56.520647 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-20 10:48:56.520655 | orchestrator | Saturday 20 September 2025 10:45:48 +0000 (0:00:01.538) 0:00:31.409 **** 2025-09-20 10:48:56.520663 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520670 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520678 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520686 | orchestrator | 2025-09-20 10:48:56.520694 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-20 10:48:56.520702 | orchestrator | Saturday 20 September 2025 10:45:49 +0000 (0:00:01.380) 0:00:32.790 **** 2025-09-20 10:48:56.520710 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520718 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520725 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520733 | orchestrator | 2025-09-20 10:48:56.520741 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-20 10:48:56.520749 | orchestrator | Saturday 20 September 2025 10:45:50 +0000 (0:00:01.206) 0:00:33.996 **** 2025-09-20 10:48:56.520757 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.520764 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.520773 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.520781 | orchestrator | 2025-09-20 10:48:56.520788 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-20 10:48:56.520796 | orchestrator | Saturday 20 September 2025 10:45:51 +0000 (0:00:00.358) 0:00:34.355 **** 2025-09-20 10:48:56.520804 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520812 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520820 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520828 | orchestrator | 2025-09-20 10:48:56.520836 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-20 10:48:56.520844 | orchestrator | Saturday 20 September 2025 10:45:51 +0000 (0:00:00.554) 0:00:34.909 **** 2025-09-20 10:48:56.520851 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.520859 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.520867 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.520875 | orchestrator | 2025-09-20 10:48:56.520883 | orchestrator | TASK 
[k3s_server : Deploy vip manifest] **************************************** 2025-09-20 10:48:56.520896 | orchestrator | Saturday 20 September 2025 10:45:53 +0000 (0:00:01.536) 0:00:36.446 **** 2025-09-20 10:48:56.520904 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:48:56.520912 | orchestrator | 2025-09-20 10:48:56.520921 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-20 10:48:56.520929 | orchestrator | Saturday 20 September 2025 10:45:54 +0000 (0:00:01.192) 0:00:37.638 **** 2025-09-20 10:48:56.520941 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.520949 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.520956 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.520964 | orchestrator | 2025-09-20 10:48:56.520972 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-20 10:48:56.520980 | orchestrator | Saturday 20 September 2025 10:45:56 +0000 (0:00:01.965) 0:00:39.603 **** 2025-09-20 10:48:56.520988 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.520996 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.521004 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521012 | orchestrator | 2025-09-20 10:48:56.521020 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-20 10:48:56.521028 | orchestrator | Saturday 20 September 2025 10:45:56 +0000 (0:00:00.571) 0:00:40.175 **** 2025-09-20 10:48:56.521035 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.521043 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521051 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.521059 | orchestrator | 2025-09-20 10:48:56.521067 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-20 10:48:56.521075 | orchestrator | Saturday 20 September 2025 10:45:58 +0000 (0:00:01.715) 0:00:41.891 **** 2025-09-20 10:48:56.521083 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.521091 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.521099 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521106 | orchestrator | 2025-09-20 10:48:56.521114 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-20 10:48:56.521122 | orchestrator | Saturday 20 September 2025 10:46:00 +0000 (0:00:01.587) 0:00:43.478 **** 2025-09-20 10:48:56.521130 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.521138 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.521146 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.521153 | orchestrator | 2025-09-20 10:48:56.521162 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-20 10:48:56.521170 | orchestrator | Saturday 20 September 2025 10:46:01 +0000 (0:00:00.890) 0:00:44.369 **** 2025-09-20 10:48:56.521178 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.521186 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.521193 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.521201 | orchestrator | 2025-09-20 10:48:56.521209 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-20 10:48:56.521217 | orchestrator | Saturday 20 September 2025 
10:46:01 +0000 (0:00:00.315) 0:00:44.685 **** 2025-09-20 10:48:56.521225 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521233 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.521241 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521249 | orchestrator | 2025-09-20 10:48:56.521261 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-20 10:48:56.521270 | orchestrator | Saturday 20 September 2025 10:46:03 +0000 (0:00:01.875) 0:00:46.561 **** 2025-09-20 10:48:56.521278 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-20 10:48:56.521286 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-20 10:48:56.521294 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-20 10:48:56.521309 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-20 10:48:56.521317 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-20 10:48:56.521325 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-20 10:48:56.521333 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-20 10:48:56.521341 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-20 10:48:56.521349 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-20 10:48:56.521357 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-20 10:48:56.521365 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-20 10:48:56.521372 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-20 10:48:56.521380 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
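This retry loop is the playbook waiting for the freshly bootstrapped control-plane nodes (testbed-node-0/1/2) to register with each other; the successful results follow below once they do. The task name already points at k3s-init.service as the place to inspect if the retries run out. An illustrative spot-check on one of the master nodes, not taken from the k3s_server role itself, might look like this:

```sh
# Manual verification along the same lines as the task above (illustrative only).
systemctl status k3s-init                 # transient service driving the first cluster start
journalctl -u k3s-init --no-pager -n 50   # its recent logs, as the task name suggests
k3s kubectl get nodes -o wide             # the three master nodes should eventually appear here
```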
2025-09-20 10:48:56.521388 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.521396 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.521404 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.521412 | orchestrator | 2025-09-20 10:48:56.521420 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-20 10:48:56.521428 | orchestrator | Saturday 20 September 2025 10:46:57 +0000 (0:00:54.570) 0:01:41.131 **** 2025-09-20 10:48:56.521440 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.521448 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.521456 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.521463 | orchestrator | 2025-09-20 10:48:56.521471 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-20 10:48:56.521525 | orchestrator | Saturday 20 September 2025 10:46:58 +0000 (0:00:00.323) 0:01:41.455 **** 2025-09-20 10:48:56.521533 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521541 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.521549 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521557 | orchestrator | 2025-09-20 10:48:56.521565 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-20 10:48:56.521573 | orchestrator | Saturday 20 September 2025 10:46:59 +0000 (0:00:01.090) 0:01:42.546 **** 2025-09-20 10:48:56.521581 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521589 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.521597 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521605 | orchestrator | 2025-09-20 10:48:56.521613 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-20 10:48:56.521621 | orchestrator | Saturday 20 September 2025 10:47:00 +0000 (0:00:01.358) 0:01:43.904 **** 2025-09-20 10:48:56.521629 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521637 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.521645 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521652 | orchestrator | 2025-09-20 10:48:56.521661 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-20 10:48:56.521669 | orchestrator | Saturday 20 September 2025 10:47:27 +0000 (0:00:26.637) 0:02:10.541 **** 2025-09-20 10:48:56.521676 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.521692 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.521700 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.521708 | orchestrator | 2025-09-20 10:48:56.521716 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-20 10:48:56.521724 | orchestrator | Saturday 20 September 2025 10:47:27 +0000 (0:00:00.601) 0:02:11.143 **** 2025-09-20 10:48:56.521732 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.521740 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.521748 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.521755 | orchestrator | 2025-09-20 10:48:56.521764 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-20 10:48:56.521772 | orchestrator | Saturday 20 September 2025 10:47:28 +0000 (0:00:00.651) 0:02:11.795 **** 2025-09-20 10:48:56.521780 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521788 | orchestrator | changed: 
[testbed-node-1] 2025-09-20 10:48:56.521795 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521803 | orchestrator | 2025-09-20 10:48:56.521816 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-20 10:48:56.521824 | orchestrator | Saturday 20 September 2025 10:47:29 +0000 (0:00:00.652) 0:02:12.447 **** 2025-09-20 10:48:56.521832 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.521840 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.521848 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.521856 | orchestrator | 2025-09-20 10:48:56.521864 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-20 10:48:56.521872 | orchestrator | Saturday 20 September 2025 10:47:29 +0000 (0:00:00.831) 0:02:13.278 **** 2025-09-20 10:48:56.521880 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.521888 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.521895 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.521903 | orchestrator | 2025-09-20 10:48:56.521911 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-20 10:48:56.521920 | orchestrator | Saturday 20 September 2025 10:47:30 +0000 (0:00:00.251) 0:02:13.529 **** 2025-09-20 10:48:56.521928 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521936 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.521944 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521952 | orchestrator | 2025-09-20 10:48:56.521960 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-20 10:48:56.521967 | orchestrator | Saturday 20 September 2025 10:47:30 +0000 (0:00:00.632) 0:02:14.161 **** 2025-09-20 10:48:56.521973 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.521980 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.521987 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.521993 | orchestrator | 2025-09-20 10:48:56.522000 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-20 10:48:56.522007 | orchestrator | Saturday 20 September 2025 10:47:31 +0000 (0:00:00.626) 0:02:14.788 **** 2025-09-20 10:48:56.522014 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.522063 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.522071 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.522077 | orchestrator | 2025-09-20 10:48:56.522084 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-20 10:48:56.522091 | orchestrator | Saturday 20 September 2025 10:47:32 +0000 (0:00:00.946) 0:02:15.735 **** 2025-09-20 10:48:56.522097 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:48:56.522104 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:48:56.522111 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:48:56.522117 | orchestrator | 2025-09-20 10:48:56.522124 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-20 10:48:56.522131 | orchestrator | Saturday 20 September 2025 10:47:33 +0000 (0:00:00.842) 0:02:16.577 **** 2025-09-20 10:48:56.522138 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.522144 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.522151 | orchestrator | skipping: [testbed-node-2] 2025-09-20 
10:48:56.522163 | orchestrator | 2025-09-20 10:48:56.522170 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-20 10:48:56.522176 | orchestrator | Saturday 20 September 2025 10:47:33 +0000 (0:00:00.274) 0:02:16.852 **** 2025-09-20 10:48:56.522183 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.522190 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.522196 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.522203 | orchestrator | 2025-09-20 10:48:56.522210 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-20 10:48:56.522216 | orchestrator | Saturday 20 September 2025 10:47:33 +0000 (0:00:00.310) 0:02:17.163 **** 2025-09-20 10:48:56.522223 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.522230 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.522237 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.522243 | orchestrator | 2025-09-20 10:48:56.522250 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-20 10:48:56.522257 | orchestrator | Saturday 20 September 2025 10:47:34 +0000 (0:00:00.780) 0:02:17.943 **** 2025-09-20 10:48:56.522264 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.522271 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.522278 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.522284 | orchestrator | 2025-09-20 10:48:56.522291 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-20 10:48:56.522298 | orchestrator | Saturday 20 September 2025 10:47:35 +0000 (0:00:00.609) 0:02:18.553 **** 2025-09-20 10:48:56.522305 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-20 10:48:56.522312 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-20 10:48:56.522319 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-20 10:48:56.522326 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-20 10:48:56.522333 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-20 10:48:56.522339 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-20 10:48:56.522346 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-20 10:48:56.522353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-20 10:48:56.522360 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-20 10:48:56.522366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-20 10:48:56.522373 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-20 10:48:56.522380 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-20 10:48:56.522392 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-20 10:48:56.522399 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-20 10:48:56.522405 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-20 10:48:56.522412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-20 10:48:56.522419 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-20 10:48:56.522426 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-20 10:48:56.522432 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-20 10:48:56.522439 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-20 10:48:56.522450 | orchestrator | 2025-09-20 10:48:56.522457 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-20 10:48:56.522464 | orchestrator | 2025-09-20 10:48:56.522470 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-20 10:48:56.522490 | orchestrator | Saturday 20 September 2025 10:47:38 +0000 (0:00:02.801) 0:02:21.354 **** 2025-09-20 10:48:56.522497 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.522504 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.522510 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.522517 | orchestrator | 2025-09-20 10:48:56.523021 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-20 10:48:56.523034 | orchestrator | Saturday 20 September 2025 10:47:38 +0000 (0:00:00.470) 0:02:21.825 **** 2025-09-20 10:48:56.523041 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.523048 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.523055 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.523061 | orchestrator | 2025-09-20 10:48:56.523068 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-20 10:48:56.523075 | orchestrator | Saturday 20 September 2025 10:47:39 +0000 (0:00:00.590) 0:02:22.415 **** 2025-09-20 10:48:56.523082 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.523089 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.523095 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.523102 | orchestrator | 2025-09-20 10:48:56.523108 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-20 10:48:56.523115 | orchestrator | Saturday 20 September 2025 10:47:39 +0000 (0:00:00.259) 0:02:22.675 **** 2025-09-20 10:48:56.523122 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:48:56.523129 | orchestrator | 2025-09-20 10:48:56.523136 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-20 10:48:56.523142 | orchestrator | Saturday 20 September 2025 10:47:39 +0000 (0:00:00.539) 0:02:23.214 **** 2025-09-20 10:48:56.523149 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.523156 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.523163 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.523170 | orchestrator | 2025-09-20 10:48:56.523176 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-09-20 10:48:56.523183 | orchestrator | Saturday 20 September 2025 10:47:40 +0000 (0:00:00.277) 0:02:23.492 **** 2025-09-20 10:48:56.523190 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.523197 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.523203 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.523210 | orchestrator | 2025-09-20 10:48:56.523217 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-20 10:48:56.523224 | orchestrator | Saturday 20 September 2025 10:47:40 +0000 (0:00:00.265) 0:02:23.758 **** 2025-09-20 10:48:56.523231 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.523237 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.523244 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.523251 | orchestrator | 2025-09-20 10:48:56.523257 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-20 10:48:56.523264 | orchestrator | Saturday 20 September 2025 10:47:40 +0000 (0:00:00.270) 0:02:24.029 **** 2025-09-20 10:48:56.523271 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.523277 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.523284 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.523291 | orchestrator | 2025-09-20 10:48:56.523298 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-20 10:48:56.523304 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:00.690) 0:02:24.719 **** 2025-09-20 10:48:56.523311 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.523318 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.523330 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.523337 | orchestrator | 2025-09-20 10:48:56.523344 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-20 10:48:56.523351 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:01.011) 0:02:25.731 **** 2025-09-20 10:48:56.523357 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.523364 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.523371 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.523377 | orchestrator | 2025-09-20 10:48:56.523384 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-20 10:48:56.523391 | orchestrator | Saturday 20 September 2025 10:47:43 +0000 (0:00:01.183) 0:02:26.914 **** 2025-09-20 10:48:56.523398 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:48:56.523405 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:48:56.523411 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:48:56.523418 | orchestrator | 2025-09-20 10:48:56.523425 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-20 10:48:56.523431 | orchestrator | 2025-09-20 10:48:56.523438 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-20 10:48:56.523445 | orchestrator | Saturday 20 September 2025 10:47:55 +0000 (0:00:11.826) 0:02:38.741 **** 2025-09-20 10:48:56.523452 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.523459 | orchestrator | 2025-09-20 10:48:56.523523 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-20 
10:48:56.523533 | orchestrator | Saturday 20 September 2025 10:47:56 +0000 (0:00:00.763) 0:02:39.505 **** 2025-09-20 10:48:56.523540 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523546 | orchestrator | 2025-09-20 10:48:56.523553 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-20 10:48:56.523560 | orchestrator | Saturday 20 September 2025 10:47:56 +0000 (0:00:00.486) 0:02:39.991 **** 2025-09-20 10:48:56.523567 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-20 10:48:56.523574 | orchestrator | 2025-09-20 10:48:56.523580 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-20 10:48:56.523587 | orchestrator | Saturday 20 September 2025 10:47:57 +0000 (0:00:00.543) 0:02:40.535 **** 2025-09-20 10:48:56.523594 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523601 | orchestrator | 2025-09-20 10:48:56.523607 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-20 10:48:56.523614 | orchestrator | Saturday 20 September 2025 10:47:57 +0000 (0:00:00.781) 0:02:41.317 **** 2025-09-20 10:48:56.523621 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523628 | orchestrator | 2025-09-20 10:48:56.523634 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-20 10:48:56.523641 | orchestrator | Saturday 20 September 2025 10:47:58 +0000 (0:00:00.600) 0:02:41.917 **** 2025-09-20 10:48:56.523648 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 10:48:56.523655 | orchestrator | 2025-09-20 10:48:56.523662 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-20 10:48:56.523669 | orchestrator | Saturday 20 September 2025 10:48:00 +0000 (0:00:01.648) 0:02:43.566 **** 2025-09-20 10:48:56.523675 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 10:48:56.523682 | orchestrator | 2025-09-20 10:48:56.523689 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-20 10:48:56.523696 | orchestrator | Saturday 20 September 2025 10:48:01 +0000 (0:00:00.817) 0:02:44.383 **** 2025-09-20 10:48:56.523703 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523709 | orchestrator | 2025-09-20 10:48:56.523716 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-20 10:48:56.523723 | orchestrator | Saturday 20 September 2025 10:48:01 +0000 (0:00:00.444) 0:02:44.828 **** 2025-09-20 10:48:56.523730 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523736 | orchestrator | 2025-09-20 10:48:56.523743 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-20 10:48:56.523755 | orchestrator | 2025-09-20 10:48:56.523762 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-20 10:48:56.523769 | orchestrator | Saturday 20 September 2025 10:48:02 +0000 (0:00:00.728) 0:02:45.556 **** 2025-09-20 10:48:56.523775 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.523782 | orchestrator | 2025-09-20 10:48:56.523789 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-20 10:48:56.523796 | orchestrator | Saturday 20 September 2025 10:48:02 +0000 (0:00:00.198) 0:02:45.755 **** 2025-09-20 
10:48:56.523802 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 10:48:56.523809 | orchestrator | 2025-09-20 10:48:56.523816 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-20 10:48:56.523823 | orchestrator | Saturday 20 September 2025 10:48:02 +0000 (0:00:00.386) 0:02:46.142 **** 2025-09-20 10:48:56.523829 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.523836 | orchestrator | 2025-09-20 10:48:56.523843 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-20 10:48:56.523849 | orchestrator | Saturday 20 September 2025 10:48:03 +0000 (0:00:00.754) 0:02:46.897 **** 2025-09-20 10:48:56.523856 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.523863 | orchestrator | 2025-09-20 10:48:56.523870 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-20 10:48:56.523876 | orchestrator | Saturday 20 September 2025 10:48:04 +0000 (0:00:01.394) 0:02:48.292 **** 2025-09-20 10:48:56.523883 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523890 | orchestrator | 2025-09-20 10:48:56.523897 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-20 10:48:56.523904 | orchestrator | Saturday 20 September 2025 10:48:05 +0000 (0:00:00.746) 0:02:49.039 **** 2025-09-20 10:48:56.523910 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.523917 | orchestrator | 2025-09-20 10:48:56.523924 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-20 10:48:56.523931 | orchestrator | Saturday 20 September 2025 10:48:06 +0000 (0:00:00.436) 0:02:49.476 **** 2025-09-20 10:48:56.523937 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523944 | orchestrator | 2025-09-20 10:48:56.523951 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-20 10:48:56.523958 | orchestrator | Saturday 20 September 2025 10:48:13 +0000 (0:00:06.902) 0:02:56.379 **** 2025-09-20 10:48:56.523964 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.523971 | orchestrator | 2025-09-20 10:48:56.523978 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-20 10:48:56.523985 | orchestrator | Saturday 20 September 2025 10:48:25 +0000 (0:00:12.479) 0:03:08.858 **** 2025-09-20 10:48:56.523991 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.523998 | orchestrator | 2025-09-20 10:48:56.524005 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-20 10:48:56.524012 | orchestrator | 2025-09-20 10:48:56.524018 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-20 10:48:56.524025 | orchestrator | Saturday 20 September 2025 10:48:26 +0000 (0:00:00.493) 0:03:09.351 **** 2025-09-20 10:48:56.524031 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.524038 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.524044 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.524050 | orchestrator | 2025-09-20 10:48:56.524057 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-20 10:48:56.524063 | orchestrator | Saturday 20 September 2025 10:48:26 +0000 (0:00:00.313) 0:03:09.665 **** 
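Note on the kubectl tasks above: the role adds the upstream pkgs.k8s.io apt repository on the manager and installs kubectl from it. A minimal shell sketch of the equivalent manual steps, assuming the v1.31 package branch (the branch is not taken from this job):

    # Manual equivalent of the kubectl role's Debian-family tasks (sketch).
    sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key \
      | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    sudo chmod 0644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg   # "Set permissions of gpg key"
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' \
      | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update && sudo apt-get install -y kubectl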
2025-09-20 10:48:56.524080 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524087 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.524093 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.524099 | orchestrator | 2025-09-20 10:48:56.524106 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-20 10:48:56.524116 | orchestrator | Saturday 20 September 2025 10:48:26 +0000 (0:00:00.362) 0:03:10.028 **** 2025-09-20 10:48:56.524123 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:48:56.524129 | orchestrator | 2025-09-20 10:48:56.524135 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-20 10:48:56.524141 | orchestrator | Saturday 20 September 2025 10:48:27 +0000 (0:00:00.623) 0:03:10.651 **** 2025-09-20 10:48:56.524147 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524153 | orchestrator | 2025-09-20 10:48:56.524160 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-20 10:48:56.524166 | orchestrator | Saturday 20 September 2025 10:48:27 +0000 (0:00:00.220) 0:03:10.872 **** 2025-09-20 10:48:56.524172 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524178 | orchestrator | 2025-09-20 10:48:56.524184 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-20 10:48:56.524191 | orchestrator | Saturday 20 September 2025 10:48:27 +0000 (0:00:00.225) 0:03:11.097 **** 2025-09-20 10:48:56.524197 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524203 | orchestrator | 2025-09-20 10:48:56.524209 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-20 10:48:56.524216 | orchestrator | Saturday 20 September 2025 10:48:28 +0000 (0:00:00.265) 0:03:11.363 **** 2025-09-20 10:48:56.524222 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524228 | orchestrator | 2025-09-20 10:48:56.524234 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-20 10:48:56.524240 | orchestrator | Saturday 20 September 2025 10:48:28 +0000 (0:00:00.263) 0:03:11.627 **** 2025-09-20 10:48:56.524246 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524253 | orchestrator | 2025-09-20 10:48:56.524259 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-20 10:48:56.524265 | orchestrator | Saturday 20 September 2025 10:48:28 +0000 (0:00:00.203) 0:03:11.830 **** 2025-09-20 10:48:56.524271 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524278 | orchestrator | 2025-09-20 10:48:56.524284 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-20 10:48:56.524290 | orchestrator | Saturday 20 September 2025 10:48:28 +0000 (0:00:00.189) 0:03:12.019 **** 2025-09-20 10:48:56.524296 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524302 | orchestrator | 2025-09-20 10:48:56.524308 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-20 10:48:56.524315 | orchestrator | Saturday 20 September 2025 10:48:28 +0000 (0:00:00.197) 0:03:12.217 **** 2025-09-20 10:48:56.524321 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524327 | orchestrator | 
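The k3s_server_post tasks skipped above mirror the documented Cilium CLI install flow: fetch the stable version file, download the tarball and its checksum, verify, extract to /usr/local/bin, and clean up. A shell sketch of that flow, based on the upstream cilium-cli instructions and assuming x86_64 nodes:

    # Cilium CLI install flow corresponding to the skipped tasks (sketch; amd64 assumed).
    CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
    CLI_ARCH=amd64
    curl -L --fail --remote-name-all \
      https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
    sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
    sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
    rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}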
2025-09-20 10:48:56.524333 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-20 10:48:56.524339 | orchestrator | Saturday 20 September 2025 10:48:29 +0000 (0:00:00.211) 0:03:12.429 **** 2025-09-20 10:48:56.524346 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524352 | orchestrator | 2025-09-20 10:48:56.524358 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-20 10:48:56.524364 | orchestrator | Saturday 20 September 2025 10:48:29 +0000 (0:00:00.295) 0:03:12.724 **** 2025-09-20 10:48:56.524370 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-20 10:48:56.524377 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-20 10:48:56.524383 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524389 | orchestrator | 2025-09-20 10:48:56.524395 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-20 10:48:56.524401 | orchestrator | Saturday 20 September 2025 10:48:29 +0000 (0:00:00.551) 0:03:13.276 **** 2025-09-20 10:48:56.524408 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524414 | orchestrator | 2025-09-20 10:48:56.524420 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-20 10:48:56.524431 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:00.193) 0:03:13.470 **** 2025-09-20 10:48:56.524437 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524443 | orchestrator | 2025-09-20 10:48:56.524449 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-20 10:48:56.524455 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:00.207) 0:03:13.677 **** 2025-09-20 10:48:56.524462 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524468 | orchestrator | 2025-09-20 10:48:56.524490 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-20 10:48:56.524497 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:00.253) 0:03:13.931 **** 2025-09-20 10:48:56.524503 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524509 | orchestrator | 2025-09-20 10:48:56.524515 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-20 10:48:56.524521 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:00.197) 0:03:14.129 **** 2025-09-20 10:48:56.524528 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524534 | orchestrator | 2025-09-20 10:48:56.524540 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-20 10:48:56.524547 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:00.180) 0:03:14.310 **** 2025-09-20 10:48:56.524553 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524559 | orchestrator | 2025-09-20 10:48:56.524565 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-20 10:48:56.524571 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:00.170) 0:03:14.480 **** 2025-09-20 10:48:56.524577 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524583 | orchestrator | 2025-09-20 10:48:56.524590 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-20 10:48:56.524596 
| orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:00.181) 0:03:14.661 **** 2025-09-20 10:48:56.524609 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524616 | orchestrator | 2025-09-20 10:48:56.524622 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-20 10:48:56.524628 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:00.227) 0:03:14.889 **** 2025-09-20 10:48:56.524634 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524641 | orchestrator | 2025-09-20 10:48:56.524647 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-20 10:48:56.524653 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:00.163) 0:03:15.052 **** 2025-09-20 10:48:56.524659 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524666 | orchestrator | 2025-09-20 10:48:56.524672 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-20 10:48:56.524678 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:00.199) 0:03:15.252 **** 2025-09-20 10:48:56.524684 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524690 | orchestrator | 2025-09-20 10:48:56.524697 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-20 10:48:56.524703 | orchestrator | Saturday 20 September 2025 10:48:32 +0000 (0:00:00.225) 0:03:15.478 **** 2025-09-20 10:48:56.524709 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-20 10:48:56.524716 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-20 10:48:56.524722 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-20 10:48:56.524728 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-20 10:48:56.524734 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524740 | orchestrator | 2025-09-20 10:48:56.524746 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-20 10:48:56.524753 | orchestrator | Saturday 20 September 2025 10:48:32 +0000 (0:00:00.834) 0:03:16.312 **** 2025-09-20 10:48:56.524759 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524765 | orchestrator | 2025-09-20 10:48:56.524775 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-20 10:48:56.524782 | orchestrator | Saturday 20 September 2025 10:48:33 +0000 (0:00:00.263) 0:03:16.575 **** 2025-09-20 10:48:56.524788 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524794 | orchestrator | 2025-09-20 10:48:56.524800 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-20 10:48:56.524806 | orchestrator | Saturday 20 September 2025 10:48:33 +0000 (0:00:00.185) 0:03:16.760 **** 2025-09-20 10:48:56.524813 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524819 | orchestrator | 2025-09-20 10:48:56.524825 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-20 10:48:56.524832 | orchestrator | Saturday 20 September 2025 10:48:33 +0000 (0:00:00.187) 0:03:16.948 **** 2025-09-20 10:48:56.524838 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524844 | orchestrator | 2025-09-20 10:48:56.524850 | orchestrator | TASK [k3s_server_post : Test 
for BGP config resources] ************************* 2025-09-20 10:48:56.524857 | orchestrator | Saturday 20 September 2025 10:48:33 +0000 (0:00:00.194) 0:03:17.142 **** 2025-09-20 10:48:56.524863 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-20 10:48:56.524869 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-20 10:48:56.524875 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524882 | orchestrator | 2025-09-20 10:48:56.524888 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-20 10:48:56.524894 | orchestrator | Saturday 20 September 2025 10:48:34 +0000 (0:00:00.253) 0:03:17.395 **** 2025-09-20 10:48:56.524900 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.524906 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.524913 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.524919 | orchestrator | 2025-09-20 10:48:56.524925 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-20 10:48:56.524931 | orchestrator | Saturday 20 September 2025 10:48:34 +0000 (0:00:00.280) 0:03:17.676 **** 2025-09-20 10:48:56.524938 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.524944 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:48:56.524950 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.524956 | orchestrator | 2025-09-20 10:48:56.524962 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-20 10:48:56.524969 | orchestrator | 2025-09-20 10:48:56.524975 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-20 10:48:56.524981 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:00.971) 0:03:18.648 **** 2025-09-20 10:48:56.524987 | orchestrator | ok: [testbed-manager] 2025-09-20 10:48:56.524994 | orchestrator | 2025-09-20 10:48:56.525000 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-20 10:48:56.525006 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:00.109) 0:03:18.757 **** 2025-09-20 10:48:56.525012 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 10:48:56.525019 | orchestrator | 2025-09-20 10:48:56.525025 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-20 10:48:56.525031 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:00.209) 0:03:18.966 **** 2025-09-20 10:48:56.525037 | orchestrator | changed: [testbed-manager] 2025-09-20 10:48:56.525043 | orchestrator | 2025-09-20 10:48:56.525050 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-20 10:48:56.525056 | orchestrator | 2025-09-20 10:48:56.525062 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-20 10:48:56.525068 | orchestrator | Saturday 20 September 2025 10:48:40 +0000 (0:00:05.225) 0:03:24.192 **** 2025-09-20 10:48:56.525075 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:48:56.525081 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:48:56.525087 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:48:56.525097 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:48:56.525103 | orchestrator | ok: [testbed-node-1] 
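The label management below applies the OSISM node roles with kubectl, delegated to localhost. A shell sketch of the equivalent direct commands, using label key/value pairs that appear in the following task output:

    # Direct equivalent of the "Manage labels" task for one control-plane
    # and one worker node (sketch; labels are the ones applied below).
    kubectl label node testbed-node-0 \
      node-role.osism.tech/control-plane=true \
      openstack-control-plane=enabled \
      node-role.osism.tech/network-plane=true --overwrite
    kubectl label node testbed-node-3 \
      node-role.osism.tech/compute-plane=true \
      node-role.kubernetes.io/worker=worker \
      node-role.osism.tech/rook-osd=true --overwrite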
2025-09-20 10:48:56.525110 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:48:56.525116 | orchestrator | 2025-09-20 10:48:56.525126 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-20 10:48:56.525135 | orchestrator | Saturday 20 September 2025 10:48:41 +0000 (0:00:00.667) 0:03:24.860 **** 2025-09-20 10:48:56.525142 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-20 10:48:56.525148 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-20 10:48:56.525155 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-20 10:48:56.525161 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-20 10:48:56.525167 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-20 10:48:56.525173 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-20 10:48:56.525180 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-20 10:48:56.525186 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-20 10:48:56.525192 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-20 10:48:56.525198 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-20 10:48:56.525205 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-20 10:48:56.525211 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-20 10:48:56.525217 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-20 10:48:56.525223 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-20 10:48:56.525229 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-20 10:48:56.525236 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-20 10:48:56.525242 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-20 10:48:56.525248 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-20 10:48:56.525254 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-20 10:48:56.525260 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-20 10:48:56.525267 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-20 10:48:56.525273 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-20 10:48:56.525279 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-20 10:48:56.525285 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-20 10:48:56.525292 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-20 10:48:56.525298 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mon=true) 2025-09-20 10:48:56.525304 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-20 10:48:56.525310 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-20 10:48:56.525316 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-20 10:48:56.525322 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-20 10:48:56.525329 | orchestrator | 2025-09-20 10:48:56.525335 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-20 10:48:56.525346 | orchestrator | Saturday 20 September 2025 10:48:54 +0000 (0:00:12.845) 0:03:37.705 **** 2025-09-20 10:48:56.525353 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.525359 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.525365 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.525372 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.525378 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.525384 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.525390 | orchestrator | 2025-09-20 10:48:56.525397 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-20 10:48:56.525403 | orchestrator | Saturday 20 September 2025 10:48:54 +0000 (0:00:00.612) 0:03:38.317 **** 2025-09-20 10:48:56.525409 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:48:56.525415 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:48:56.525421 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:48:56.525427 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:48:56.525434 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:48:56.525440 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:48:56.525446 | orchestrator | 2025-09-20 10:48:56.525452 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:48:56.525458 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:48:56.525466 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-20 10:48:56.525495 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-20 10:48:56.525502 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-20 10:48:56.525509 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-20 10:48:56.525515 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-20 10:48:56.525522 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-20 10:48:56.525528 | orchestrator | 2025-09-20 10:48:56.525534 | orchestrator | 2025-09-20 10:48:56.525541 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:48:56.525547 | orchestrator | Saturday 20 September 2025 10:48:55 +0000 (0:00:00.417) 0:03:38.735 **** 2025-09-20 10:48:56.525554 | orchestrator | =============================================================================== 2025-09-20 10:48:56.525560 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.57s 2025-09-20 10:48:56.525566 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.64s 2025-09-20 10:48:56.525573 | orchestrator | Manage labels ---------------------------------------------------------- 12.85s 2025-09-20 10:48:56.525579 | orchestrator | kubectl : Install required packages ------------------------------------ 12.48s 2025-09-20 10:48:56.525585 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.83s 2025-09-20 10:48:56.525592 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.90s 2025-09-20 10:48:56.525598 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.51s 2025-09-20 10:48:56.525604 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.23s 2025-09-20 10:48:56.525610 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.82s 2025-09-20 10:48:56.525621 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.80s 2025-09-20 10:48:56.525627 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.79s 2025-09-20 10:48:56.525633 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.18s 2025-09-20 10:48:56.525640 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.97s 2025-09-20 10:48:56.525646 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.96s 2025-09-20 10:48:56.525652 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.95s 2025-09-20 10:48:56.525658 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.88s 2025-09-20 10:48:56.525664 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.72s 2025-09-20 10:48:56.525670 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 2025-09-20 10:48:56.525677 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.59s 2025-09-20 10:48:56.525683 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.56s 2025-09-20 10:48:56.525689 | orchestrator | 2025-09-20 10:48:56 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:56.526126 | orchestrator | 2025-09-20 10:48:56 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:56.529463 | orchestrator | 2025-09-20 10:48:56 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:48:56.529505 | orchestrator | 2025-09-20 10:48:56 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:48:59.677861 | orchestrator | 2025-09-20 10:48:59 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:48:59.679462 | orchestrator | 2025-09-20 10:48:59 | INFO  | Task a26a83fa-4454-4432-894a-286841e00693 is in state STARTED 2025-09-20 10:48:59.680295 | orchestrator | 2025-09-20 10:48:59 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:48:59.682878 | orchestrator | 2025-09-20 10:48:59 | INFO  | Task 
65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:48:59.683453 | orchestrator | 2025-09-20 10:48:59 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:48:59.686733 | orchestrator | 2025-09-20 10:48:59 | INFO  | Task 22d3ae8f-9329-409e-87e8-b70b88ec3f82 is in state STARTED 2025-09-20 10:48:59.686780 | orchestrator | 2025-09-20 10:48:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:02.717611 | orchestrator | 2025-09-20 10:49:02 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:02.766282 | orchestrator | 2025-09-20 10:49:02 | INFO  | Task a26a83fa-4454-4432-894a-286841e00693 is in state STARTED 2025-09-20 10:49:02.766360 | orchestrator | 2025-09-20 10:49:02 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:02.767258 | orchestrator | 2025-09-20 10:49:02 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:02.770077 | orchestrator | 2025-09-20 10:49:02 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:02.777118 | orchestrator | 2025-09-20 10:49:02 | INFO  | Task 22d3ae8f-9329-409e-87e8-b70b88ec3f82 is in state STARTED 2025-09-20 10:49:02.777142 | orchestrator | 2025-09-20 10:49:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:05.969930 | orchestrator | 2025-09-20 10:49:05 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:05.970293 | orchestrator | 2025-09-20 10:49:05 | INFO  | Task a26a83fa-4454-4432-894a-286841e00693 is in state SUCCESS 2025-09-20 10:49:05.971334 | orchestrator | 2025-09-20 10:49:05 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:05.971810 | orchestrator | 2025-09-20 10:49:05 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:05.972646 | orchestrator | 2025-09-20 10:49:05 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:05.973298 | orchestrator | 2025-09-20 10:49:05 | INFO  | Task 22d3ae8f-9329-409e-87e8-b70b88ec3f82 is in state STARTED 2025-09-20 10:49:05.973322 | orchestrator | 2025-09-20 10:49:05 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:09.018002 | orchestrator | 2025-09-20 10:49:09 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:09.022448 | orchestrator | 2025-09-20 10:49:09 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:09.023270 | orchestrator | 2025-09-20 10:49:09 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:09.024308 | orchestrator | 2025-09-20 10:49:09 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:09.026184 | orchestrator | 2025-09-20 10:49:09 | INFO  | Task 22d3ae8f-9329-409e-87e8-b70b88ec3f82 is in state SUCCESS 2025-09-20 10:49:09.026209 | orchestrator | 2025-09-20 10:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:12.064529 | orchestrator | 2025-09-20 10:49:12 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:12.065225 | orchestrator | 2025-09-20 10:49:12 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:12.065847 | orchestrator | 2025-09-20 10:49:12 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:12.067140 | orchestrator | 2025-09-20 
10:49:12 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:12.067235 | orchestrator | 2025-09-20 10:49:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:15.107325 | orchestrator | 2025-09-20 10:49:15 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:15.107433 | orchestrator | 2025-09-20 10:49:15 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:15.107446 | orchestrator | 2025-09-20 10:49:15 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:15.107456 | orchestrator | 2025-09-20 10:49:15 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:15.107467 | orchestrator | 2025-09-20 10:49:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:18.139557 | orchestrator | 2025-09-20 10:49:18 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:18.139952 | orchestrator | 2025-09-20 10:49:18 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:18.143100 | orchestrator | 2025-09-20 10:49:18 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:18.143150 | orchestrator | 2025-09-20 10:49:18 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:18.143164 | orchestrator | 2025-09-20 10:49:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:21.179739 | orchestrator | 2025-09-20 10:49:21 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:21.181632 | orchestrator | 2025-09-20 10:49:21 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:21.184067 | orchestrator | 2025-09-20 10:49:21 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:21.186378 | orchestrator | 2025-09-20 10:49:21 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:21.186879 | orchestrator | 2025-09-20 10:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:24.229976 | orchestrator | 2025-09-20 10:49:24 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:24.230404 | orchestrator | 2025-09-20 10:49:24 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:24.232173 | orchestrator | 2025-09-20 10:49:24 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:24.233453 | orchestrator | 2025-09-20 10:49:24 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:24.233606 | orchestrator | 2025-09-20 10:49:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:27.277117 | orchestrator | 2025-09-20 10:49:27 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:27.279630 | orchestrator | 2025-09-20 10:49:27 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:27.281074 | orchestrator | 2025-09-20 10:49:27 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:27.283275 | orchestrator | 2025-09-20 10:49:27 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:27.283360 | orchestrator | 2025-09-20 10:49:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:30.346193 | orchestrator | 2025-09-20 10:49:30 | INFO  | Task 
fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:30.347776 | orchestrator | 2025-09-20 10:49:30 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:30.349378 | orchestrator | 2025-09-20 10:49:30 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:30.350688 | orchestrator | 2025-09-20 10:49:30 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:30.350718 | orchestrator | 2025-09-20 10:49:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:33.395784 | orchestrator | 2025-09-20 10:49:33 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:33.397439 | orchestrator | 2025-09-20 10:49:33 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:33.399613 | orchestrator | 2025-09-20 10:49:33 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:33.402167 | orchestrator | 2025-09-20 10:49:33 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:33.402225 | orchestrator | 2025-09-20 10:49:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:36.436272 | orchestrator | 2025-09-20 10:49:36 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:36.437692 | orchestrator | 2025-09-20 10:49:36 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:36.439921 | orchestrator | 2025-09-20 10:49:36 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:36.440800 | orchestrator | 2025-09-20 10:49:36 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:36.440836 | orchestrator | 2025-09-20 10:49:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:39.477453 | orchestrator | 2025-09-20 10:49:39 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:39.477877 | orchestrator | 2025-09-20 10:49:39 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:39.479885 | orchestrator | 2025-09-20 10:49:39 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:39.481355 | orchestrator | 2025-09-20 10:49:39 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:39.481380 | orchestrator | 2025-09-20 10:49:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:42.510077 | orchestrator | 2025-09-20 10:49:42 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:42.512351 | orchestrator | 2025-09-20 10:49:42 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:42.514768 | orchestrator | 2025-09-20 10:49:42 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:42.516462 | orchestrator | 2025-09-20 10:49:42 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:42.516788 | orchestrator | 2025-09-20 10:49:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:45.560932 | orchestrator | 2025-09-20 10:49:45 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:45.563218 | orchestrator | 2025-09-20 10:49:45 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:45.565819 | orchestrator | 2025-09-20 10:49:45 | INFO  | Task 
65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:45.566867 | orchestrator | 2025-09-20 10:49:45 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:45.566914 | orchestrator | 2025-09-20 10:49:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:48.619579 | orchestrator | 2025-09-20 10:49:48 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:48.620471 | orchestrator | 2025-09-20 10:49:48 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:48.622135 | orchestrator | 2025-09-20 10:49:48 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:48.622927 | orchestrator | 2025-09-20 10:49:48 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:48.622951 | orchestrator | 2025-09-20 10:49:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:51.674648 | orchestrator | 2025-09-20 10:49:51 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:51.676089 | orchestrator | 2025-09-20 10:49:51 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:51.677672 | orchestrator | 2025-09-20 10:49:51 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:51.679568 | orchestrator | 2025-09-20 10:49:51 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:51.679595 | orchestrator | 2025-09-20 10:49:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:54.716293 | orchestrator | 2025-09-20 10:49:54 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:54.718247 | orchestrator | 2025-09-20 10:49:54 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:54.721008 | orchestrator | 2025-09-20 10:49:54 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:54.723675 | orchestrator | 2025-09-20 10:49:54 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:54.723739 | orchestrator | 2025-09-20 10:49:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:49:57.768606 | orchestrator | 2025-09-20 10:49:57 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:49:57.770419 | orchestrator | 2025-09-20 10:49:57 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:49:57.772869 | orchestrator | 2025-09-20 10:49:57 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:49:57.775858 | orchestrator | 2025-09-20 10:49:57 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:49:57.775978 | orchestrator | 2025-09-20 10:49:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:00.809825 | orchestrator | 2025-09-20 10:50:00 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:00.810751 | orchestrator | 2025-09-20 10:50:00 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:50:00.811326 | orchestrator | 2025-09-20 10:50:00 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:00.812338 | orchestrator | 2025-09-20 10:50:00 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:00.812365 | orchestrator | 2025-09-20 10:50:00 | INFO  | Wait 1 
second(s) until the next check 2025-09-20 10:50:03.850116 | orchestrator | 2025-09-20 10:50:03 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:03.850374 | orchestrator | 2025-09-20 10:50:03 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:50:03.851240 | orchestrator | 2025-09-20 10:50:03 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:03.852019 | orchestrator | 2025-09-20 10:50:03 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:03.852043 | orchestrator | 2025-09-20 10:50:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:06.889080 | orchestrator | 2025-09-20 10:50:06 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:06.891448 | orchestrator | 2025-09-20 10:50:06 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:50:06.892528 | orchestrator | 2025-09-20 10:50:06 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:06.894112 | orchestrator | 2025-09-20 10:50:06 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:06.894138 | orchestrator | 2025-09-20 10:50:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:09.922730 | orchestrator | 2025-09-20 10:50:09 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:09.922902 | orchestrator | 2025-09-20 10:50:09 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:50:09.923579 | orchestrator | 2025-09-20 10:50:09 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:09.924658 | orchestrator | 2025-09-20 10:50:09 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:09.924679 | orchestrator | 2025-09-20 10:50:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:12.950626 | orchestrator | 2025-09-20 10:50:12 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:12.953595 | orchestrator | 2025-09-20 10:50:12 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state STARTED 2025-09-20 10:50:12.955624 | orchestrator | 2025-09-20 10:50:12 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:12.957246 | orchestrator | 2025-09-20 10:50:12 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:12.957449 | orchestrator | 2025-09-20 10:50:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:15.989163 | orchestrator | 2025-09-20 10:50:15 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:15.989284 | orchestrator | 2025-09-20 10:50:15 | INFO  | Task 8237fe49-2ab1-4bc7-ad8f-c43cf2dadac4 is in state SUCCESS 2025-09-20 10:50:15.989973 | orchestrator | 2025-09-20 10:50:15.990003 | orchestrator | 2025-09-20 10:50:15.990064 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-20 10:50:15.990078 | orchestrator | 2025-09-20 10:50:15.990089 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-20 10:50:15.990099 | orchestrator | Saturday 20 September 2025 10:49:00 +0000 (0:00:00.206) 0:00:00.206 **** 2025-09-20 10:50:15.990110 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-20 10:50:15.990120 | 
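The kubeconfig plays in this run fetch the admin kubeconfig from the first control-plane node and rewrite its server address. k3s writes that file to /etc/rancher/k3s/k3s.yaml with https://127.0.0.1:6443 as the server by default; a manual sketch of the same steps, with the API endpoint left as a placeholder:

    # Manual equivalent of the kubeconfig preparation (sketch; replace the
    # placeholder endpoint with the cluster's API address or VIP; reading
    # k3s.yaml on the node may require root).
    mkdir -p ~/.kube
    scp testbed-node-0:/etc/rancher/k3s/k3s.yaml ~/.kube/config
    chmod 600 ~/.kube/config
    sed -i 's|https://127.0.0.1:6443|https://<api-endpoint>:6443|' ~/.kube/config
    kubectl get nodes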
orchestrator | 2025-09-20 10:50:15.990130 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-20 10:50:15.990140 | orchestrator | Saturday 20 September 2025 10:49:01 +0000 (0:00:00.776) 0:00:00.982 **** 2025-09-20 10:50:15.990150 | orchestrator | changed: [testbed-manager] 2025-09-20 10:50:15.990160 | orchestrator | 2025-09-20 10:50:15.990169 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-20 10:50:15.990179 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:01.061) 0:00:02.044 **** 2025-09-20 10:50:15.990189 | orchestrator | changed: [testbed-manager] 2025-09-20 10:50:15.990198 | orchestrator | 2025-09-20 10:50:15.990208 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:50:15.990218 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:50:15.990230 | orchestrator | 2025-09-20 10:50:15.990239 | orchestrator | 2025-09-20 10:50:15.990249 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:50:15.990259 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:00.654) 0:00:02.699 **** 2025-09-20 10:50:15.990269 | orchestrator | =============================================================================== 2025-09-20 10:50:15.990278 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.06s 2025-09-20 10:50:15.990288 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2025-09-20 10:50:15.990297 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.65s 2025-09-20 10:50:15.990306 | orchestrator | 2025-09-20 10:50:15.990316 | orchestrator | 2025-09-20 10:50:15.990326 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-20 10:50:15.990336 | orchestrator | 2025-09-20 10:50:15.990345 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-20 10:50:15.990355 | orchestrator | Saturday 20 September 2025 10:48:59 +0000 (0:00:00.165) 0:00:00.165 **** 2025-09-20 10:50:15.990365 | orchestrator | ok: [testbed-manager] 2025-09-20 10:50:15.990376 | orchestrator | 2025-09-20 10:50:15.990386 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-20 10:50:15.990412 | orchestrator | Saturday 20 September 2025 10:49:00 +0000 (0:00:00.511) 0:00:00.677 **** 2025-09-20 10:50:15.990423 | orchestrator | ok: [testbed-manager] 2025-09-20 10:50:15.990449 | orchestrator | 2025-09-20 10:50:15.990511 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-20 10:50:15.990606 | orchestrator | Saturday 20 September 2025 10:49:00 +0000 (0:00:00.490) 0:00:01.168 **** 2025-09-20 10:50:15.990622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-20 10:50:15.990633 | orchestrator | 2025-09-20 10:50:15.990664 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-20 10:50:15.990675 | orchestrator | Saturday 20 September 2025 10:49:01 +0000 (0:00:00.617) 0:00:01.785 **** 2025-09-20 10:50:15.990685 | orchestrator | changed: [testbed-manager] 2025-09-20 10:50:15.990695 | orchestrator | 2025-09-20 10:50:15.990705 | 
orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-20 10:50:15.990715 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:01.018) 0:00:02.803 **** 2025-09-20 10:50:15.990725 | orchestrator | changed: [testbed-manager] 2025-09-20 10:50:15.990734 | orchestrator | 2025-09-20 10:50:15.990744 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-20 10:50:15.990754 | orchestrator | Saturday 20 September 2025 10:49:03 +0000 (0:00:00.778) 0:00:03.582 **** 2025-09-20 10:50:15.990764 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 10:50:15.990774 | orchestrator | 2025-09-20 10:50:15.990784 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-20 10:50:15.990794 | orchestrator | Saturday 20 September 2025 10:49:04 +0000 (0:00:01.527) 0:00:05.109 **** 2025-09-20 10:50:15.990804 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 10:50:15.990814 | orchestrator | 2025-09-20 10:50:15.990824 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-20 10:50:15.990833 | orchestrator | Saturday 20 September 2025 10:49:05 +0000 (0:00:00.744) 0:00:05.853 **** 2025-09-20 10:50:15.990843 | orchestrator | ok: [testbed-manager] 2025-09-20 10:50:15.990853 | orchestrator | 2025-09-20 10:50:15.990863 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-20 10:50:15.990873 | orchestrator | Saturday 20 September 2025 10:49:06 +0000 (0:00:00.414) 0:00:06.268 **** 2025-09-20 10:50:15.990884 | orchestrator | ok: [testbed-manager] 2025-09-20 10:50:15.990894 | orchestrator | 2025-09-20 10:50:15.990904 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:50:15.990914 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:50:15.990924 | orchestrator | 2025-09-20 10:50:15.990934 | orchestrator | 2025-09-20 10:50:15.990944 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:50:15.990954 | orchestrator | Saturday 20 September 2025 10:49:06 +0000 (0:00:00.289) 0:00:06.557 **** 2025-09-20 10:50:15.990964 | orchestrator | =============================================================================== 2025-09-20 10:50:15.990974 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.53s 2025-09-20 10:50:15.990984 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s 2025-09-20 10:50:15.990994 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.78s 2025-09-20 10:50:15.991017 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2025-09-20 10:50:15.991028 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.62s 2025-09-20 10:50:15.991038 | orchestrator | Get home directory of operator user ------------------------------------- 0.51s 2025-09-20 10:50:15.991048 | orchestrator | Create .kube directory -------------------------------------------------- 0.49s 2025-09-20 10:50:15.991058 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2025-09-20 10:50:15.991067 | orchestrator | Enable kubectl command line 
completion ---------------------------------- 0.29s 2025-09-20 10:50:15.991077 | orchestrator | 2025-09-20 10:50:15.991087 | orchestrator | 2025-09-20 10:50:15.991097 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-20 10:50:15.991107 | orchestrator | 2025-09-20 10:50:15.991117 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-20 10:50:15.991127 | orchestrator | Saturday 20 September 2025 10:48:02 +0000 (0:00:00.078) 0:00:00.078 **** 2025-09-20 10:50:15.991136 | orchestrator | ok: [localhost] => { 2025-09-20 10:50:15.991147 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-20 10:50:15.991164 | orchestrator | } 2025-09-20 10:50:15.991187 | orchestrator | 2025-09-20 10:50:15.991198 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-20 10:50:15.991207 | orchestrator | Saturday 20 September 2025 10:48:02 +0000 (0:00:00.040) 0:00:00.118 **** 2025-09-20 10:50:15.991218 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-20 10:50:15.991228 | orchestrator | ...ignoring 2025-09-20 10:50:15.991239 | orchestrator | 2025-09-20 10:50:15.991250 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-20 10:50:15.991261 | orchestrator | Saturday 20 September 2025 10:48:05 +0000 (0:00:02.628) 0:00:02.747 **** 2025-09-20 10:50:15.991271 | orchestrator | skipping: [localhost] 2025-09-20 10:50:15.991282 | orchestrator | 2025-09-20 10:50:15.991292 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-20 10:50:15.991303 | orchestrator | Saturday 20 September 2025 10:48:05 +0000 (0:00:00.051) 0:00:02.799 **** 2025-09-20 10:50:15.991314 | orchestrator | ok: [localhost] 2025-09-20 10:50:15.991324 | orchestrator | 2025-09-20 10:50:15.991335 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:50:15.991345 | orchestrator | 2025-09-20 10:50:15.991356 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:50:15.991366 | orchestrator | Saturday 20 September 2025 10:48:05 +0000 (0:00:00.166) 0:00:02.965 **** 2025-09-20 10:50:15.991377 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:50:15.991394 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:50:15.991405 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:50:15.991416 | orchestrator | 2025-09-20 10:50:15.991427 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:50:15.991437 | orchestrator | Saturday 20 September 2025 10:48:06 +0000 (0:00:00.299) 0:00:03.265 **** 2025-09-20 10:50:15.991448 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-20 10:50:15.991459 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-20 10:50:15.991470 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-20 10:50:15.991496 | orchestrator | 2025-09-20 10:50:15.991508 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-20 10:50:15.991518 | orchestrator | 2025-09-20 10:50:15.991529 | orchestrator | TASK 
[rabbitmq : include_tasks] ************************************************ 2025-09-20 10:50:15.991540 | orchestrator | Saturday 20 September 2025 10:48:06 +0000 (0:00:00.448) 0:00:03.713 **** 2025-09-20 10:50:15.991551 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:50:15.991562 | orchestrator | 2025-09-20 10:50:15.991572 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-20 10:50:15.991582 | orchestrator | Saturday 20 September 2025 10:48:07 +0000 (0:00:00.990) 0:00:04.703 **** 2025-09-20 10:50:15.991593 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:50:15.991603 | orchestrator | 2025-09-20 10:50:15.991614 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-20 10:50:15.991624 | orchestrator | Saturday 20 September 2025 10:48:08 +0000 (0:00:00.882) 0:00:05.586 **** 2025-09-20 10:50:15.991635 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.991644 | orchestrator | 2025-09-20 10:50:15.991654 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-20 10:50:15.991663 | orchestrator | Saturday 20 September 2025 10:48:08 +0000 (0:00:00.621) 0:00:06.207 **** 2025-09-20 10:50:15.991673 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.991682 | orchestrator | 2025-09-20 10:50:15.991692 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-20 10:50:15.991702 | orchestrator | Saturday 20 September 2025 10:48:09 +0000 (0:00:00.415) 0:00:06.623 **** 2025-09-20 10:50:15.991717 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.991727 | orchestrator | 2025-09-20 10:50:15.991737 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-20 10:50:15.991746 | orchestrator | Saturday 20 September 2025 10:48:09 +0000 (0:00:00.414) 0:00:07.037 **** 2025-09-20 10:50:15.991756 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.991765 | orchestrator | 2025-09-20 10:50:15.991775 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-20 10:50:15.991785 | orchestrator | Saturday 20 September 2025 10:48:10 +0000 (0:00:00.854) 0:00:07.891 **** 2025-09-20 10:50:15.991794 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:50:15.991804 | orchestrator | 2025-09-20 10:50:15.991814 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-20 10:50:15.991830 | orchestrator | Saturday 20 September 2025 10:48:12 +0000 (0:00:02.297) 0:00:10.189 **** 2025-09-20 10:50:15.991840 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:50:15.991850 | orchestrator | 2025-09-20 10:50:15.991859 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-20 10:50:15.991869 | orchestrator | Saturday 20 September 2025 10:48:14 +0000 (0:00:01.134) 0:00:11.324 **** 2025-09-20 10:50:15.991879 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.991888 | orchestrator | 2025-09-20 10:50:15.991898 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-20 10:50:15.991908 | orchestrator | Saturday 20 September 2025 10:48:15 +0000 (0:00:01.163) 0:00:12.487 
**** 2025-09-20 10:50:15.991917 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.991927 | orchestrator | 2025-09-20 10:50:15.991936 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-20 10:50:15.991946 | orchestrator | Saturday 20 September 2025 10:48:15 +0000 (0:00:00.486) 0:00:12.973 **** 2025-09-20 10:50:15.991962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.991977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.991996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.992007 | orchestrator | 2025-09-20 10:50:15.992017 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-20 10:50:15.992027 | orchestrator | Saturday 20 September 2025 10:48:17 +0000 (0:00:01.538) 0:00:14.512 **** 2025-09-20 10:50:15.992044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.992136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.992159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.992177 | orchestrator | 2025-09-20 10:50:15.992187 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-20 10:50:15.992197 | orchestrator | Saturday 20 September 2025 10:48:19 +0000 (0:00:02.025) 0:00:16.537 **** 2025-09-20 10:50:15.992206 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-20 10:50:15.992216 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-20 10:50:15.992226 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-20 10:50:15.992236 | orchestrator | 2025-09-20 10:50:15.992245 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-20 10:50:15.992255 | orchestrator | Saturday 20 September 2025 10:48:21 +0000 (0:00:02.155) 0:00:18.693 **** 2025-09-20 10:50:15.992264 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-20 10:50:15.992274 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-20 10:50:15.992283 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-20 10:50:15.992293 | orchestrator | 2025-09-20 10:50:15.992303 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-20 10:50:15.992320 | orchestrator | Saturday 20 September 2025 10:48:23 +0000 (0:00:02.029) 0:00:20.722 **** 2025-09-20 10:50:15.992330 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-20 10:50:15.992339 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-20 10:50:15.992349 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-20 10:50:15.992358 | orchestrator | 2025-09-20 10:50:15.992368 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-20 10:50:15.992377 | orchestrator | Saturday 20 September 2025 10:48:25 +0000 (0:00:01.868) 0:00:22.591 **** 2025-09-20 10:50:15.992387 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-20 10:50:15.992396 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-20 10:50:15.992406 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-20 10:50:15.992415 | orchestrator | 2025-09-20 10:50:15.992425 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-20 10:50:15.992435 | orchestrator | Saturday 20 September 2025 10:48:27 +0000 (0:00:01.968) 0:00:24.559 **** 2025-09-20 10:50:15.992444 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-20 10:50:15.992454 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-20 10:50:15.992464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-20 10:50:15.992508 | orchestrator | 2025-09-20 10:50:15.992519 | 
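The series of "Copying over ..." tasks above stages the broker configuration (rabbitmq-env.conf, rabbitmq.conf, erl_inetrc, advanced.config, definitions.json, enabled_plugins) under /etc/kolla/rabbitmq/ on each node; the config.json written earlier tells the kolla image entrypoint to copy those files into place on container start (KOLLA_CONFIG_STRATEGY=COPY_ALWAYS). A rough sketch of how such a templating loop looks in a kolla-ansible style role; the file list is taken from the log, the task code itself is an assumption:

- name: Copying over rabbitmq config files
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/etc/kolla/rabbitmq/{{ item }}"
    mode: "0660"
  become: true
  loop:
    - rabbitmq-env.conf
    - rabbitmq.conf
    - erl_inetrc
    - advanced.config
    - definitions.json
    - enabled_plugins
  notify:
    - Restart rabbitmq container

Because every file change notifies the same handler, the container is restarted at most once per node regardless of how many templates changed.
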
orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-20 10:50:15.992529 | orchestrator | Saturday 20 September 2025 10:48:28 +0000 (0:00:01.557) 0:00:26.117 **** 2025-09-20 10:50:15.992539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-20 10:50:15.992555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-20 10:50:15.992565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-20 10:50:15.992574 | orchestrator | 2025-09-20 10:50:15.992584 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-20 10:50:15.992594 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:01.702) 0:00:27.820 **** 2025-09-20 10:50:15.992603 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:50:15.992618 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.992627 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:50:15.992637 | orchestrator | 2025-09-20 10:50:15.992647 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-20 10:50:15.992657 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:00.577) 0:00:28.397 **** 2025-09-20 10:50:15.992667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.992685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}}) 2025-09-20 10:50:15.992697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:50:15.992715 | orchestrator | 2025-09-20 10:50:15.992725 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-20 10:50:15.992735 | orchestrator | Saturday 20 September 2025 10:48:34 +0000 (0:00:02.925) 0:00:31.323 **** 2025-09-20 10:50:15.992744 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:50:15.992754 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:50:15.992763 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:50:15.992773 | orchestrator | 2025-09-20 10:50:15.992783 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-20 10:50:15.992792 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:00.916) 0:00:32.240 **** 2025-09-20 10:50:15.992802 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:50:15.992812 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:50:15.992821 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:50:15.992830 | orchestrator | 2025-09-20 10:50:15.992840 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-20 10:50:15.992850 | orchestrator | Saturday 20 September 2025 10:48:42 +0000 (0:00:07.371) 0:00:39.611 **** 2025-09-20 10:50:15.992864 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:50:15.992874 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:50:15.992884 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:50:15.992893 | orchestrator | 2025-09-20 10:50:15.992903 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-20 10:50:15.992912 | orchestrator | 2025-09-20 10:50:15.992922 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-20 10:50:15.992932 | orchestrator | Saturday 20 September 2025 10:48:43 +0000 (0:00:00.847) 0:00:40.459 **** 2025-09-20 10:50:15.992942 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:50:15.992951 | orchestrator | 2025-09-20 10:50:15.992961 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-20 10:50:15.992970 | orchestrator | Saturday 20 September 2025 10:48:43 +0000 (0:00:00.507) 0:00:40.967 **** 2025-09-20 10:50:15.992980 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:50:15.992989 | orchestrator | 2025-09-20 10:50:15.992999 | orchestrator 
| TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-20 10:50:15.993009 | orchestrator | Saturday 20 September 2025 10:48:44 +0000 (0:00:00.321) 0:00:41.288 **** 2025-09-20 10:50:15.993019 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:50:15.993028 | orchestrator | 2025-09-20 10:50:15.993038 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-20 10:50:15.993047 | orchestrator | Saturday 20 September 2025 10:48:46 +0000 (0:00:02.054) 0:00:43.343 **** 2025-09-20 10:50:15.993057 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:50:15.993067 | orchestrator | 2025-09-20 10:50:15.993076 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-20 10:50:15.993086 | orchestrator | 2025-09-20 10:50:15.993095 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-20 10:50:15.993105 | orchestrator | Saturday 20 September 2025 10:49:38 +0000 (0:00:52.419) 0:01:35.762 **** 2025-09-20 10:50:15.993114 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:50:15.993124 | orchestrator | 2025-09-20 10:50:15.993134 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-20 10:50:15.993143 | orchestrator | Saturday 20 September 2025 10:49:39 +0000 (0:00:00.545) 0:01:36.308 **** 2025-09-20 10:50:15.993153 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:50:15.993162 | orchestrator | 2025-09-20 10:50:15.993172 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-20 10:50:15.993181 | orchestrator | Saturday 20 September 2025 10:49:39 +0000 (0:00:00.220) 0:01:36.528 **** 2025-09-20 10:50:15.993191 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:50:15.993200 | orchestrator | 2025-09-20 10:50:15.993210 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-20 10:50:15.993220 | orchestrator | Saturday 20 September 2025 10:49:41 +0000 (0:00:01.715) 0:01:38.243 **** 2025-09-20 10:50:15.993236 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:50:15.993245 | orchestrator | 2025-09-20 10:50:15.993255 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-20 10:50:15.993264 | orchestrator | 2025-09-20 10:50:15.993274 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-20 10:50:15.993284 | orchestrator | Saturday 20 September 2025 10:49:56 +0000 (0:00:15.774) 0:01:54.017 **** 2025-09-20 10:50:15.993293 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:50:15.993303 | orchestrator | 2025-09-20 10:50:15.993317 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-20 10:50:15.993327 | orchestrator | Saturday 20 September 2025 10:49:57 +0000 (0:00:00.584) 0:01:54.602 **** 2025-09-20 10:50:15.993337 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:50:15.993346 | orchestrator | 2025-09-20 10:50:15.993356 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-20 10:50:15.993366 | orchestrator | Saturday 20 September 2025 10:49:57 +0000 (0:00:00.245) 0:01:54.847 **** 2025-09-20 10:50:15.993376 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:50:15.993385 | orchestrator | 2025-09-20 10:50:15.993395 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-09-20 10:50:15.993405 | orchestrator | Saturday 20 September 2025 10:49:59 +0000 (0:00:01.490) 0:01:56.338 **** 2025-09-20 10:50:15.993414 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:50:15.993424 | orchestrator | 2025-09-20 10:50:15.993433 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-20 10:50:15.993443 | orchestrator | 2025-09-20 10:50:15.993452 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-20 10:50:15.993462 | orchestrator | Saturday 20 September 2025 10:50:11 +0000 (0:00:12.698) 0:02:09.036 **** 2025-09-20 10:50:15.993471 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:50:15.993533 | orchestrator | 2025-09-20 10:50:15.993543 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-20 10:50:15.993553 | orchestrator | Saturday 20 September 2025 10:50:12 +0000 (0:00:00.533) 0:02:09.570 **** 2025-09-20 10:50:15.993562 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-20 10:50:15.993572 | orchestrator | enable_outward_rabbitmq_True 2025-09-20 10:50:15.993582 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-20 10:50:15.993591 | orchestrator | outward_rabbitmq_restart 2025-09-20 10:50:15.993601 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:50:15.993611 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:50:15.993620 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:50:15.993630 | orchestrator | 2025-09-20 10:50:15.993639 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-20 10:50:15.993649 | orchestrator | skipping: no hosts matched 2025-09-20 10:50:15.993659 | orchestrator | 2025-09-20 10:50:15.993668 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-20 10:50:15.993678 | orchestrator | skipping: no hosts matched 2025-09-20 10:50:15.993688 | orchestrator | 2025-09-20 10:50:15.993697 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-20 10:50:15.993707 | orchestrator | skipping: no hosts matched 2025-09-20 10:50:15.993717 | orchestrator | 2025-09-20 10:50:15.993726 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:50:15.993741 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-20 10:50:15.993750 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-20 10:50:15.993758 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:50:15.993771 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:50:15.993779 | orchestrator | 2025-09-20 10:50:15.993787 | orchestrator | 2025-09-20 10:50:15.993795 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:50:15.993803 | orchestrator | Saturday 20 September 2025 10:50:14 +0000 (0:00:02.096) 0:02:11.667 **** 2025-09-20 10:50:15.993811 | orchestrator | =============================================================================== 2025-09-20 10:50:15.993819 | 
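The three "Restart rabbitmq services" plays above restart the brokers one node at a time, which is why "rabbitmq : Waiting for rabbitmq to start" dominates the timing recap that follows (roughly 81 s in total). Once the cluster is back, the post-configuration play enables all stable feature flags. Kolla images ship rabbitmqctl inside the container, so that step boils down to something like the sketch below; the exec wrapper and changed_when handling are assumptions, not the exact kolla-ansible task:

- name: Enable all stable feature flags
  ansible.builtin.command:
    argv:
      - docker
      - exec
      - rabbitmq
      - rabbitmqctl
      - enable_feature_flag
      - all
  become: true
  changed_when: false   # rabbitmqctl does not report whether anything actually changed
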
orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.89s 2025-09-20 10:50:15.993827 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.37s 2025-09-20 10:50:15.993834 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.26s 2025-09-20 10:50:15.993842 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.93s 2025-09-20 10:50:15.993850 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.63s 2025-09-20 10:50:15.993858 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.30s 2025-09-20 10:50:15.993866 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.16s 2025-09-20 10:50:15.993874 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.10s 2025-09-20 10:50:15.993882 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.03s 2025-09-20 10:50:15.993890 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.03s 2025-09-20 10:50:15.993897 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.97s 2025-09-20 10:50:15.993905 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.87s 2025-09-20 10:50:15.993913 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.70s 2025-09-20 10:50:15.993921 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.64s 2025-09-20 10:50:15.993929 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.56s 2025-09-20 10:50:15.993937 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.54s 2025-09-20 10:50:15.993945 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.16s 2025-09-20 10:50:15.993957 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.14s 2025-09-20 10:50:15.993965 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.99s 2025-09-20 10:50:15.993973 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.92s 2025-09-20 10:50:15.993981 | orchestrator | 2025-09-20 10:50:15 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:15.993990 | orchestrator | 2025-09-20 10:50:15 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:15.993998 | orchestrator | 2025-09-20 10:50:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:19.022622 | orchestrator | 2025-09-20 10:50:19 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:19.022726 | orchestrator | 2025-09-20 10:50:19 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:19.024613 | orchestrator | 2025-09-20 10:50:19 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:19.025879 | orchestrator | 2025-09-20 10:50:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:22.057442 | orchestrator | 2025-09-20 10:50:22 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:22.057781 | orchestrator | 2025-09-20 10:50:22 
| INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:22.058801 | orchestrator | 2025-09-20 10:50:22 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:22.058828 | orchestrator | 2025-09-20 10:50:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:25.097097 | orchestrator | 2025-09-20 10:50:25 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:25.098807 | orchestrator | 2025-09-20 10:50:25 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:25.100391 | orchestrator | 2025-09-20 10:50:25 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:25.100421 | orchestrator | 2025-09-20 10:50:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:28.138906 | orchestrator | 2025-09-20 10:50:28 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:28.139014 | orchestrator | 2025-09-20 10:50:28 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:28.139030 | orchestrator | 2025-09-20 10:50:28 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:28.139053 | orchestrator | 2025-09-20 10:50:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:31.162139 | orchestrator | 2025-09-20 10:50:31 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:31.166760 | orchestrator | 2025-09-20 10:50:31 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:31.167513 | orchestrator | 2025-09-20 10:50:31 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:31.167550 | orchestrator | 2025-09-20 10:50:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:34.208538 | orchestrator | 2025-09-20 10:50:34 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:34.210211 | orchestrator | 2025-09-20 10:50:34 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:34.212669 | orchestrator | 2025-09-20 10:50:34 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:34.212954 | orchestrator | 2025-09-20 10:50:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:37.260088 | orchestrator | 2025-09-20 10:50:37 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:37.262224 | orchestrator | 2025-09-20 10:50:37 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:37.264031 | orchestrator | 2025-09-20 10:50:37 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:37.264352 | orchestrator | 2025-09-20 10:50:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:40.305291 | orchestrator | 2025-09-20 10:50:40 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:40.305596 | orchestrator | 2025-09-20 10:50:40 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:40.306644 | orchestrator | 2025-09-20 10:50:40 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:40.306675 | orchestrator | 2025-09-20 10:50:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:43.357673 | orchestrator | 2025-09-20 10:50:43 | INFO  | Task 
fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:43.359094 | orchestrator | 2025-09-20 10:50:43 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:43.361679 | orchestrator | 2025-09-20 10:50:43 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:43.361728 | orchestrator | 2025-09-20 10:50:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:46.399459 | orchestrator | 2025-09-20 10:50:46 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:46.399932 | orchestrator | 2025-09-20 10:50:46 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:46.402995 | orchestrator | 2025-09-20 10:50:46 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:46.403525 | orchestrator | 2025-09-20 10:50:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:49.448994 | orchestrator | 2025-09-20 10:50:49 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:49.450765 | orchestrator | 2025-09-20 10:50:49 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:49.453088 | orchestrator | 2025-09-20 10:50:49 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:49.453177 | orchestrator | 2025-09-20 10:50:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:52.503319 | orchestrator | 2025-09-20 10:50:52 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:52.503389 | orchestrator | 2025-09-20 10:50:52 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:52.503395 | orchestrator | 2025-09-20 10:50:52 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:52.504671 | orchestrator | 2025-09-20 10:50:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:55.555064 | orchestrator | 2025-09-20 10:50:55 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:55.556614 | orchestrator | 2025-09-20 10:50:55 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:55.560537 | orchestrator | 2025-09-20 10:50:55 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:55.560692 | orchestrator | 2025-09-20 10:50:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:50:58.607290 | orchestrator | 2025-09-20 10:50:58 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:50:58.608835 | orchestrator | 2025-09-20 10:50:58 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:50:58.610513 | orchestrator | 2025-09-20 10:50:58 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:50:58.610713 | orchestrator | 2025-09-20 10:50:58 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:51:01.657511 | orchestrator | 2025-09-20 10:51:01 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:51:01.659944 | orchestrator | 2025-09-20 10:51:01 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:51:01.661503 | orchestrator | 2025-09-20 10:51:01 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:51:01.661548 | orchestrator | 2025-09-20 10:51:01 | INFO  | Wait 1 second(s) until the next 
check 2025-09-20 10:51:04.722616 | orchestrator | 2025-09-20 10:51:04 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:51:04.724766 | orchestrator | 2025-09-20 10:51:04 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:51:04.727043 | orchestrator | 2025-09-20 10:51:04 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:51:04.727090 | orchestrator | 2025-09-20 10:51:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:51:07.769190 | orchestrator | 2025-09-20 10:51:07 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:51:07.772053 | orchestrator | 2025-09-20 10:51:07 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:51:07.773074 | orchestrator | 2025-09-20 10:51:07 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state STARTED 2025-09-20 10:51:07.773173 | orchestrator | 2025-09-20 10:51:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:51:10.825916 | orchestrator | 2025-09-20 10:51:10 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:51:10.827746 | orchestrator | 2025-09-20 10:51:10 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:51:10.830898 | orchestrator | 2025-09-20 10:51:10 | INFO  | Task 3dacabaa-3453-4c4d-9a9e-368cee0f23a0 is in state SUCCESS 2025-09-20 10:51:10.832853 | orchestrator | 2025-09-20 10:51:10.832885 | orchestrator | 2025-09-20 10:51:10.832898 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:51:10.832910 | orchestrator | 2025-09-20 10:51:10.832921 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:51:10.832932 | orchestrator | Saturday 20 September 2025 10:48:45 +0000 (0:00:00.144) 0:00:00.144 **** 2025-09-20 10:51:10.832943 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:51:10.832955 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:51:10.832966 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:51:10.832977 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.832988 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.832998 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.833009 | orchestrator | 2025-09-20 10:51:10.833020 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:51:10.833068 | orchestrator | Saturday 20 September 2025 10:48:46 +0000 (0:00:00.956) 0:00:01.101 **** 2025-09-20 10:51:10.833081 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-20 10:51:10.833118 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-20 10:51:10.833129 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-20 10:51:10.833140 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-20 10:51:10.833211 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-20 10:51:10.833224 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-20 10:51:10.833354 | orchestrator | 2025-09-20 10:51:10.833366 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-20 10:51:10.833446 | orchestrator | 2025-09-20 10:51:10.833459 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-20 10:51:10.833472 | orchestrator 
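Before any OVN role runs, the playbook groups hosts by Kolla action and by each enabled service flag; the enable_ovn_True group seen above is what the subsequent plays target. The grouping is essentially Ansible's group_by module keyed on the boolean, roughly as in this simplified sketch (the real role loops over a service map, so treat this as an illustration):

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_ovn_{{ enable_ovn | bool }}"

Hosts with enable_ovn set to false would land in enable_ovn_False and never match the "Apply role ovn-controller" play; the same mechanism explains the earlier "skipping: no hosts matched" results and the "Could not match supplied host pattern" warnings for the outward RabbitMQ groups.
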
| Saturday 20 September 2025 10:48:48 +0000 (0:00:02.085) 0:00:03.186 **** 2025-09-20 10:51:10.833485 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:51:10.833498 | orchestrator | 2025-09-20 10:51:10.833510 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-20 10:51:10.833536 | orchestrator | Saturday 20 September 2025 10:48:49 +0000 (0:00:01.012) 0:00:04.199 **** 2025-09-20 10:51:10.833552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833940 | orchestrator | 2025-09-20 10:51:10.833962 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-20 10:51:10.833974 | orchestrator | Saturday 20 
September 2025 10:48:50 +0000 (0:00:01.443) 0:00:05.642 **** 2025-09-20 10:51:10.833985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.833997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834115 | orchestrator | 2025-09-20 10:51:10.834126 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-20 10:51:10.834137 | orchestrator | Saturday 20 September 2025 10:48:53 +0000 (0:00:02.868) 0:00:08.511 **** 2025-09-20 10:51:10.834148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-20 10:51:10.834159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834225 | orchestrator | 2025-09-20 10:51:10.834236 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-20 10:51:10.834258 | orchestrator | Saturday 20 September 2025 10:48:54 +0000 (0:00:01.141) 0:00:09.652 **** 2025-09-20 10:51:10.834269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834292 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834337 | orchestrator | 2025-09-20 10:51:10.834354 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-20 10:51:10.834366 | orchestrator | Saturday 20 September 2025 10:48:56 +0000 (0:00:01.907) 0:00:11.560 **** 2025-09-20 10:51:10.834397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834451 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.834485 | orchestrator | 2025-09-20 10:51:10.834496 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-20 10:51:10.834509 | orchestrator | Saturday 20 September 2025 10:48:58 +0000 (0:00:02.061) 0:00:13.621 **** 2025-09-20 10:51:10.834521 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:51:10.834534 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:51:10.834546 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.834558 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.834570 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:51:10.834582 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.834594 | orchestrator | 2025-09-20 10:51:10.834606 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-20 10:51:10.834618 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:03.170) 0:00:16.792 **** 2025-09-20 10:51:10.834630 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-20 10:51:10.834642 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-20 10:51:10.834654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-20 10:51:10.834666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-20 10:51:10.834677 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-20 10:51:10.834689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-20 10:51:10.834701 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-20 10:51:10.834713 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-20 10:51:10.834731 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-20 10:51:10.834744 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2025-09-20 10:51:10.834756 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-20 10:51:10.834782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-20 10:51:10.834794 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-20 10:51:10.834807 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-20 10:51:10.834818 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-20 10:51:10.834829 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-20 10:51:10.834840 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-20 10:51:10.834850 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-20 10:51:10.834862 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-20 10:51:10.834873 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-20 10:51:10.834888 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-20 10:51:10.834899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-20 10:51:10.834910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-20 10:51:10.834921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-20 10:51:10.834932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-20 10:51:10.834943 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-20 10:51:10.834954 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-20 10:51:10.834965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-20 10:51:10.834975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-20 10:51:10.834986 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-20 10:51:10.834997 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-20 10:51:10.835008 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-20 10:51:10.835019 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-20 10:51:10.835030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-20 10:51:10.835041 | 
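
The "Configure OVN in OVSDB" task records the chassis settings shown above as external_ids in the local Open vSwitch database: the per-node Geneve tunnel endpoint (ovn-encap-ip), the encapsulation type, the southbound remotes on port 6642, the probe intervals, and ovn-monitor-all; the preceding task created the br-int integration bridge. A sketch of producing the same OVSDB state by hand for testbed-node-0, with values copied from the log; kolla-ansible applies these through its own modules, so this is illustrative only:

    # Illustrative only: write the chassis external_ids shown above for
    # testbed-node-0 into the local Open vSwitch database with ovs-vsctl.
    import subprocess

    # Roughly what "Create br-int bridge on OpenvSwitch" amounts to.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int"], check=True)

    external_ids = {
        "ovn-encap-ip": "192.168.16.10",
        "ovn-encap-type": "geneve",
        "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": "false",
    }
    for key, value in external_ids.items():
        subprocess.run(
            ["ovs-vsctl", "set", "Open_vSwitch", ".", f'external_ids:{key}="{value}"'],
            check=True,
        )
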
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-20 10:51:10.835052 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-20 10:51:10.835063 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-20 10:51:10.835074 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-20 10:51:10.835085 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-20 10:51:10.835095 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-20 10:51:10.835112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-20 10:51:10.835123 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-20 10:51:10.835134 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-20 10:51:10.835145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-20 10:51:10.835162 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-20 10:51:10.835173 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-20 10:51:10.835184 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-20 10:51:10.835195 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-20 10:51:10.835206 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-20 10:51:10.835217 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-20 10:51:10.835228 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-20 10:51:10.835239 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-20 10:51:10.835250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-20 10:51:10.835261 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-20 10:51:10.835272 | orchestrator | 2025-09-20 10:51:10.835283 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-20 10:51:10.835294 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:18.236) 0:00:35.028 **** 2025-09-20 10:51:10.835305 | orchestrator | 2025-09-20 10:51:10.835320 | orchestrator | TASK [ovn-controller : 
Flush handlers] ***************************************** 2025-09-20 10:51:10.835331 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.261) 0:00:35.290 **** 2025-09-20 10:51:10.835342 | orchestrator | 2025-09-20 10:51:10.835353 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-20 10:51:10.835364 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.066) 0:00:35.356 **** 2025-09-20 10:51:10.835407 | orchestrator | 2025-09-20 10:51:10.835418 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-20 10:51:10.835429 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.066) 0:00:35.423 **** 2025-09-20 10:51:10.835440 | orchestrator | 2025-09-20 10:51:10.835451 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-20 10:51:10.835462 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.063) 0:00:35.487 **** 2025-09-20 10:51:10.835473 | orchestrator | 2025-09-20 10:51:10.835484 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-20 10:51:10.835495 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.065) 0:00:35.552 **** 2025-09-20 10:51:10.835506 | orchestrator | 2025-09-20 10:51:10.835517 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-20 10:51:10.835528 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.062) 0:00:35.615 **** 2025-09-20 10:51:10.835539 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:51:10.835550 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:51:10.835568 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:51:10.835578 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.835589 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.835600 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.835611 | orchestrator | 2025-09-20 10:51:10.835622 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-20 10:51:10.835632 | orchestrator | Saturday 20 September 2025 10:49:22 +0000 (0:00:01.616) 0:00:37.231 **** 2025-09-20 10:51:10.835643 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.835654 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:51:10.835665 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.835676 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.835687 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:51:10.835697 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:51:10.835708 | orchestrator | 2025-09-20 10:51:10.835719 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-20 10:51:10.835730 | orchestrator | 2025-09-20 10:51:10.835741 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-20 10:51:10.835752 | orchestrator | Saturday 20 September 2025 10:49:55 +0000 (0:00:33.172) 0:01:10.403 **** 2025-09-20 10:51:10.835763 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:51:10.835774 | orchestrator | 2025-09-20 10:51:10.835785 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-20 10:51:10.835796 | orchestrator | Saturday 20 September 2025 10:49:56 +0000 
(0:00:00.703) 0:01:11.107 **** 2025-09-20 10:51:10.835806 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:51:10.835817 | orchestrator | 2025-09-20 10:51:10.835828 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-20 10:51:10.835839 | orchestrator | Saturday 20 September 2025 10:49:56 +0000 (0:00:00.538) 0:01:11.646 **** 2025-09-20 10:51:10.835850 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.835861 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.835872 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.835883 | orchestrator | 2025-09-20 10:51:10.835894 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-20 10:51:10.835905 | orchestrator | Saturday 20 September 2025 10:49:57 +0000 (0:00:01.008) 0:01:12.654 **** 2025-09-20 10:51:10.835916 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.835926 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.835937 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.835954 | orchestrator | 2025-09-20 10:51:10.835965 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-20 10:51:10.835976 | orchestrator | Saturday 20 September 2025 10:49:58 +0000 (0:00:00.342) 0:01:12.997 **** 2025-09-20 10:51:10.835987 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.835998 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.836009 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.836019 | orchestrator | 2025-09-20 10:51:10.836030 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-20 10:51:10.836041 | orchestrator | Saturday 20 September 2025 10:49:58 +0000 (0:00:00.326) 0:01:13.324 **** 2025-09-20 10:51:10.836052 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.836063 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.836074 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.836085 | orchestrator | 2025-09-20 10:51:10.836096 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-20 10:51:10.836106 | orchestrator | Saturday 20 September 2025 10:49:58 +0000 (0:00:00.347) 0:01:13.672 **** 2025-09-20 10:51:10.836117 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.836128 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.836139 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.836149 | orchestrator | 2025-09-20 10:51:10.836161 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-20 10:51:10.836177 | orchestrator | Saturday 20 September 2025 10:49:59 +0000 (0:00:00.583) 0:01:14.255 **** 2025-09-20 10:51:10.836188 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836199 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836210 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836221 | orchestrator | 2025-09-20 10:51:10.836232 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-20 10:51:10.836243 | orchestrator | Saturday 20 September 2025 10:49:59 +0000 (0:00:00.341) 0:01:14.597 **** 2025-09-20 10:51:10.836254 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836265 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836275 | 
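
The lookup_cluster tasks above check whether OVN NB/SB database volumes and a running Raft cluster already exist; on this fresh deployment the volumes are absent, so the liveness and leader checks are skipped and the play goes on to bootstrap a new cluster. Against an existing deployment, the skipped port-liveness probes reduce to a TCP check of the database ports (6641 for NB by convention; 6642 for SB appears in ovn-remote above). A generic sketch, with a helper name of our own rather than kolla-ansible's:

    # Sketch of a port-liveness probe of the kind the skipped
    # "Check OVN NB/SB service port liveness" tasks would perform.
    import socket

    def port_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
        print(host, "NB:", port_alive(host, 6641), "SB:", port_alive(host, 6642))
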
orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836286 | orchestrator | 2025-09-20 10:51:10.836297 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-20 10:51:10.836313 | orchestrator | Saturday 20 September 2025 10:50:00 +0000 (0:00:00.334) 0:01:14.932 **** 2025-09-20 10:51:10.836324 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836335 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836346 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836357 | orchestrator | 2025-09-20 10:51:10.836410 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-20 10:51:10.836423 | orchestrator | Saturday 20 September 2025 10:50:00 +0000 (0:00:00.373) 0:01:15.305 **** 2025-09-20 10:51:10.836435 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836446 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836457 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836468 | orchestrator | 2025-09-20 10:51:10.836479 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-20 10:51:10.836490 | orchestrator | Saturday 20 September 2025 10:50:01 +0000 (0:00:00.553) 0:01:15.859 **** 2025-09-20 10:51:10.836501 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836512 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836523 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836534 | orchestrator | 2025-09-20 10:51:10.836545 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-20 10:51:10.836556 | orchestrator | Saturday 20 September 2025 10:50:01 +0000 (0:00:00.320) 0:01:16.180 **** 2025-09-20 10:51:10.836567 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836578 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836589 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836600 | orchestrator | 2025-09-20 10:51:10.836611 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-20 10:51:10.836622 | orchestrator | Saturday 20 September 2025 10:50:01 +0000 (0:00:00.330) 0:01:16.511 **** 2025-09-20 10:51:10.836633 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836644 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836655 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836666 | orchestrator | 2025-09-20 10:51:10.836677 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-20 10:51:10.836688 | orchestrator | Saturday 20 September 2025 10:50:02 +0000 (0:00:00.319) 0:01:16.830 **** 2025-09-20 10:51:10.836699 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836710 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836721 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836732 | orchestrator | 2025-09-20 10:51:10.836743 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-20 10:51:10.836754 | orchestrator | Saturday 20 September 2025 10:50:02 +0000 (0:00:00.312) 0:01:17.143 **** 2025-09-20 10:51:10.836765 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836775 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836785 | orchestrator | skipping: [testbed-node-2] 2025-09-20 
10:51:10.836794 | orchestrator | 2025-09-20 10:51:10.836804 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-20 10:51:10.836820 | orchestrator | Saturday 20 September 2025 10:50:02 +0000 (0:00:00.524) 0:01:17.667 **** 2025-09-20 10:51:10.836830 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836839 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836849 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836859 | orchestrator | 2025-09-20 10:51:10.836869 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-20 10:51:10.836878 | orchestrator | Saturday 20 September 2025 10:50:03 +0000 (0:00:00.290) 0:01:17.958 **** 2025-09-20 10:51:10.836888 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836898 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836908 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836917 | orchestrator | 2025-09-20 10:51:10.836927 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-20 10:51:10.836937 | orchestrator | Saturday 20 September 2025 10:50:03 +0000 (0:00:00.317) 0:01:18.276 **** 2025-09-20 10:51:10.836947 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.836957 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.836972 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.836982 | orchestrator | 2025-09-20 10:51:10.836992 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-20 10:51:10.837002 | orchestrator | Saturday 20 September 2025 10:50:03 +0000 (0:00:00.312) 0:01:18.588 **** 2025-09-20 10:51:10.837012 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:51:10.837022 | orchestrator | 2025-09-20 10:51:10.837031 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-20 10:51:10.837041 | orchestrator | Saturday 20 September 2025 10:50:04 +0000 (0:00:00.822) 0:01:19.411 **** 2025-09-20 10:51:10.837051 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.837060 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.837070 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.837080 | orchestrator | 2025-09-20 10:51:10.837089 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-20 10:51:10.837099 | orchestrator | Saturday 20 September 2025 10:50:05 +0000 (0:00:00.492) 0:01:19.904 **** 2025-09-20 10:51:10.837109 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.837119 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.837128 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.837138 | orchestrator | 2025-09-20 10:51:10.837148 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-20 10:51:10.837157 | orchestrator | Saturday 20 September 2025 10:50:05 +0000 (0:00:00.468) 0:01:20.372 **** 2025-09-20 10:51:10.837167 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.837177 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.837186 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.837196 | orchestrator | 2025-09-20 10:51:10.837206 | orchestrator | TASK [ovn-db : Check SB cluster status] 
**************************************** 2025-09-20 10:51:10.837216 | orchestrator | Saturday 20 September 2025 10:50:06 +0000 (0:00:00.553) 0:01:20.925 **** 2025-09-20 10:51:10.837225 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.837235 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.837245 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.837254 | orchestrator | 2025-09-20 10:51:10.837264 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-20 10:51:10.837278 | orchestrator | Saturday 20 September 2025 10:50:06 +0000 (0:00:00.342) 0:01:21.268 **** 2025-09-20 10:51:10.837289 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.837298 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.837308 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.837318 | orchestrator | 2025-09-20 10:51:10.837327 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-20 10:51:10.837337 | orchestrator | Saturday 20 September 2025 10:50:06 +0000 (0:00:00.354) 0:01:21.623 **** 2025-09-20 10:51:10.837352 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.837362 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.837386 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.837396 | orchestrator | 2025-09-20 10:51:10.837406 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-20 10:51:10.837416 | orchestrator | Saturday 20 September 2025 10:50:07 +0000 (0:00:00.338) 0:01:21.962 **** 2025-09-20 10:51:10.837426 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.837435 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.837445 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.837455 | orchestrator | 2025-09-20 10:51:10.837464 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-20 10:51:10.837474 | orchestrator | Saturday 20 September 2025 10:50:07 +0000 (0:00:00.517) 0:01:22.479 **** 2025-09-20 10:51:10.837484 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.837494 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.837503 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.837513 | orchestrator | 2025-09-20 10:51:10.837523 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-20 10:51:10.837532 | orchestrator | Saturday 20 September 2025 10:50:08 +0000 (0:00:00.330) 0:01:22.810 **** 2025-09-20 10:51:10.837543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837572 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837656 | orchestrator | 2025-09-20 10:51:10.837666 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-20 10:51:10.837676 | orchestrator | Saturday 20 September 2025 10:50:09 +0000 (0:00:01.298) 0:01:24.109 **** 2025-09-20 10:51:10.837743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837862 | 
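
The items echoed by "Ensuring config directories exist" and "Copying over config.json files for services" all come from one per-service dict (ovn-northd, ovn-nb-db, ovn-sb-db) describing the container name, image, and volumes. A sketch of iterating such a structure, with the ovn-nb-db values copied from the loop items above; the loop only illustrates the with_dict pattern, not kolla-ansible's actual templates:

    # Values for ovn-nb-db copied from the loop items above; ovn-northd and
    # ovn-sb-db follow the same shape.
    ovn_db_services = {
        "ovn-nb-db": {
            "container_name": "ovn_nb_db",
            "group": "ovn-nb-db",
            "enabled": True,
            "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.2",
            "volumes": [
                "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
        },
    }

    for name, svc in ovn_db_services.items():
        if not svc["enabled"]:
            continue
        # Each enabled service gets /etc/kolla/<container_name>/config.json
        # rendered and its container (re)created from svc["image"] with
        # svc["volumes"] mounted.
        print(f'{name}: {svc["container_name"]} <- {svc["image"]}')
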
orchestrator | 2025-09-20 10:51:10.837872 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-20 10:51:10.837882 | orchestrator | Saturday 20 September 2025 10:50:13 +0000 (0:00:03.808) 0:01:27.917 **** 2025-09-20 10:51:10.837893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.837995 | orchestrator | 2025-09-20 10:51:10.838005 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 10:51:10.838061 | orchestrator | Saturday 20 September 2025 10:50:15 +0000 (0:00:01.836) 0:01:29.754 **** 2025-09-20 10:51:10.838075 | orchestrator | 2025-09-20 10:51:10.838089 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 10:51:10.838099 | orchestrator | Saturday 20 September 2025 10:50:15 +0000 (0:00:00.197) 0:01:29.952 **** 2025-09-20 10:51:10.838109 | orchestrator | 2025-09-20 10:51:10.838119 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 10:51:10.838129 | orchestrator | Saturday 20 September 2025 10:50:15 +0000 (0:00:00.062) 0:01:30.014 **** 2025-09-20 10:51:10.838138 | orchestrator | 2025-09-20 10:51:10.838148 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-20 10:51:10.838158 | orchestrator | Saturday 20 September 2025 10:50:15 +0000 (0:00:00.063) 0:01:30.078 **** 2025-09-20 10:51:10.838168 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.838177 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.838187 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.838197 | orchestrator | 2025-09-20 10:51:10.838207 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-20 10:51:10.838217 | orchestrator | Saturday 20 September 2025 10:50:22 +0000 (0:00:07.170) 0:01:37.249 **** 2025-09-20 10:51:10.838226 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.838236 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.838245 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.838255 | orchestrator | 2025-09-20 10:51:10.838265 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-20 10:51:10.838275 | orchestrator | Saturday 20 September 2025 10:50:29 +0000 (0:00:06.485) 0:01:43.734 **** 2025-09-20 10:51:10.838285 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.838295 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.838304 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.838314 | orchestrator | 2025-09-20 10:51:10.838324 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-20 10:51:10.838334 | orchestrator | Saturday 20 September 2025 10:50:31 +0000 (0:00:02.546) 0:01:46.280 **** 2025-09-20 10:51:10.838343 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.838353 | orchestrator | 2025-09-20 10:51:10.838363 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-20 10:51:10.838387 | orchestrator | Saturday 20 September 2025 10:50:31 +0000 (0:00:00.120) 0:01:46.401 **** 2025-09-20 10:51:10.838398 | 
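
After the database containers are restarted, the play waits for Raft leader election and looks up the OVN_Northbound and OVN_Southbound cluster leaders; only the leader node then receives the "Configure OVN NB/SB connection settings" changes. A sketch of such a leader check against the clustered ovsdb-server inside the ovn_nb_db container; the control-socket path, the docker invocation, and the follow-up ovn-nbctl command are assumptions, not read from this log:

    # Sketch: ask the clustered ovsdb-server in ovn_nb_db for its Raft role,
    # then expose the NB DB on its TCP port only on the leader.
    import subprocess

    def nb_is_leader() -> bool:
        out = subprocess.run(
            ["docker", "exec", "ovn_nb_db", "ovs-appctl",
             "-t", "/var/run/ovn/ovnnb_db.ctl",      # assumed socket path
             "cluster/status", "OVN_Northbound"],
            capture_output=True, text=True, check=True,
        ).stdout
        return "Role: leader" in out

    if nb_is_leader():
        # Something like "Configure OVN NB connection settings"
        # (port 6641 assumed, not shown in this log).
        subprocess.run(
            ["docker", "exec", "ovn_nb_db", "ovn-nbctl",
             "set-connection", "ptcp:6641:0.0.0.0"],
            check=True,
        )
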
orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.838407 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.838417 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.838427 | orchestrator | 2025-09-20 10:51:10.838437 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-20 10:51:10.838446 | orchestrator | Saturday 20 September 2025 10:50:32 +0000 (0:00:01.124) 0:01:47.526 **** 2025-09-20 10:51:10.838456 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.838473 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.838483 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.838493 | orchestrator | 2025-09-20 10:51:10.838502 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-20 10:51:10.838512 | orchestrator | Saturday 20 September 2025 10:50:33 +0000 (0:00:00.597) 0:01:48.123 **** 2025-09-20 10:51:10.838522 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.838531 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.838541 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.838551 | orchestrator | 2025-09-20 10:51:10.838561 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-20 10:51:10.838570 | orchestrator | Saturday 20 September 2025 10:50:34 +0000 (0:00:00.775) 0:01:48.899 **** 2025-09-20 10:51:10.838580 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.838590 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.838600 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.838609 | orchestrator | 2025-09-20 10:51:10.838619 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-20 10:51:10.838629 | orchestrator | Saturday 20 September 2025 10:50:34 +0000 (0:00:00.679) 0:01:49.578 **** 2025-09-20 10:51:10.838639 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.838649 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.838665 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.838676 | orchestrator | 2025-09-20 10:51:10.838685 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-20 10:51:10.838695 | orchestrator | Saturday 20 September 2025 10:50:35 +0000 (0:00:01.000) 0:01:50.579 **** 2025-09-20 10:51:10.838705 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.838715 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.838725 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.838734 | orchestrator | 2025-09-20 10:51:10.838744 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-20 10:51:10.838754 | orchestrator | Saturday 20 September 2025 10:50:36 +0000 (0:00:00.784) 0:01:51.363 **** 2025-09-20 10:51:10.838764 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.838774 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.838783 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.838793 | orchestrator | 2025-09-20 10:51:10.838803 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-20 10:51:10.838812 | orchestrator | Saturday 20 September 2025 10:50:36 +0000 (0:00:00.304) 0:01:51.668 **** 2025-09-20 10:51:10.838823 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838833 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838848 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838858 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838876 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838897 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838907 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838923 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838933 | orchestrator | 2025-09-20 10:51:10.838943 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-20 10:51:10.838953 | orchestrator | Saturday 20 September 2025 10:50:38 +0000 (0:00:01.506) 0:01:53.175 **** 2025-09-20 10:51:10.838963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838973 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838983 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.838998 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839034 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839064 | orchestrator | 2025-09-20 10:51:10.839074 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-20 10:51:10.839085 | orchestrator | Saturday 20 September 2025 10:50:42 +0000 (0:00:04.462) 0:01:57.638 **** 2025-09-20 10:51:10.839100 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839110 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839120 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839190 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 10:51:10.839200 | orchestrator | 2025-09-20 10:51:10.839211 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 10:51:10.839221 | orchestrator | Saturday 20 September 2025 10:50:45 +0000 (0:00:02.674) 0:02:00.313 **** 2025-09-20 10:51:10.839230 | orchestrator | 2025-09-20 10:51:10.839241 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 10:51:10.839250 | orchestrator | Saturday 20 September 2025 10:50:45 +0000 (0:00:00.085) 0:02:00.398 **** 2025-09-20 10:51:10.839260 | orchestrator | 2025-09-20 10:51:10.839270 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 10:51:10.839280 | orchestrator | Saturday 20 September 2025 10:50:45 +0000 (0:00:00.065) 0:02:00.464 **** 2025-09-20 10:51:10.839290 | orchestrator | 2025-09-20 10:51:10.839300 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-20 10:51:10.839309 | orchestrator | Saturday 20 September 2025 10:50:45 +0000 (0:00:00.079) 0:02:00.543 **** 2025-09-20 10:51:10.839319 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.839329 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.839339 | orchestrator | 2025-09-20 10:51:10.839354 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-20 10:51:10.839364 | orchestrator | Saturday 20 September 2025 10:50:52 +0000 (0:00:06.402) 0:02:06.946 **** 2025-09-20 10:51:10.839386 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.839397 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.839407 | orchestrator | 2025-09-20 10:51:10.839417 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-20 10:51:10.839427 | orchestrator | Saturday 20 September 2025 10:50:58 +0000 (0:00:06.179) 0:02:13.126 **** 2025-09-20 10:51:10.839436 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:51:10.839446 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:51:10.839456 | orchestrator | 2025-09-20 10:51:10.839465 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-20 10:51:10.839482 | orchestrator | Saturday 20 September 2025 10:51:05 
+0000 (0:00:06.591) 0:02:19.718 **** 2025-09-20 10:51:10.839491 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:51:10.839501 | orchestrator | 2025-09-20 10:51:10.839511 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-20 10:51:10.839521 | orchestrator | Saturday 20 September 2025 10:51:05 +0000 (0:00:00.124) 0:02:19.843 **** 2025-09-20 10:51:10.839531 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.839540 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.839550 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.839560 | orchestrator | 2025-09-20 10:51:10.839570 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-20 10:51:10.839579 | orchestrator | Saturday 20 September 2025 10:51:05 +0000 (0:00:00.754) 0:02:20.597 **** 2025-09-20 10:51:10.839589 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.839599 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.839608 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.839618 | orchestrator | 2025-09-20 10:51:10.839628 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-20 10:51:10.839637 | orchestrator | Saturday 20 September 2025 10:51:06 +0000 (0:00:00.542) 0:02:21.140 **** 2025-09-20 10:51:10.839647 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.839657 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.839666 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.839676 | orchestrator | 2025-09-20 10:51:10.839690 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-20 10:51:10.839700 | orchestrator | Saturday 20 September 2025 10:51:07 +0000 (0:00:00.734) 0:02:21.875 **** 2025-09-20 10:51:10.839710 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:51:10.839719 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:51:10.839729 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:51:10.839738 | orchestrator | 2025-09-20 10:51:10.839748 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-20 10:51:10.839758 | orchestrator | Saturday 20 September 2025 10:51:07 +0000 (0:00:00.696) 0:02:22.571 **** 2025-09-20 10:51:10.839768 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.839778 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.839787 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.839797 | orchestrator | 2025-09-20 10:51:10.839807 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-20 10:51:10.839817 | orchestrator | Saturday 20 September 2025 10:51:08 +0000 (0:00:00.732) 0:02:23.303 **** 2025-09-20 10:51:10.839827 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:51:10.839837 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:51:10.839846 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:51:10.839856 | orchestrator | 2025-09-20 10:51:10.839866 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:51:10.839876 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-20 10:51:10.839886 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-20 10:51:10.839896 | orchestrator | testbed-node-2 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-20 10:51:10.839906 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:51:10.839916 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:51:10.839925 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:51:10.839944 | orchestrator | 2025-09-20 10:51:10.839954 | orchestrator | 2025-09-20 10:51:10.839964 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:51:10.839973 | orchestrator | Saturday 20 September 2025 10:51:09 +0000 (0:00:00.854) 0:02:24.158 **** 2025-09-20 10:51:10.839983 | orchestrator | =============================================================================== 2025-09-20 10:51:10.839993 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.17s 2025-09-20 10:51:10.840003 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.24s 2025-09-20 10:51:10.840013 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.57s 2025-09-20 10:51:10.840022 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.67s 2025-09-20 10:51:10.840032 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.14s 2025-09-20 10:51:10.840042 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.46s 2025-09-20 10:51:10.840052 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.81s 2025-09-20 10:51:10.840066 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.17s 2025-09-20 10:51:10.840077 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.87s 2025-09-20 10:51:10.840086 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.67s 2025-09-20 10:51:10.840096 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.09s 2025-09-20 10:51:10.840106 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.06s 2025-09-20 10:51:10.840116 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.91s 2025-09-20 10:51:10.840126 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.84s 2025-09-20 10:51:10.840135 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.62s 2025-09-20 10:51:10.840145 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2025-09-20 10:51:10.840155 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.44s 2025-09-20 10:51:10.840164 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.30s 2025-09-20 10:51:10.840174 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.14s 2025-09-20 10:51:10.840184 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.12s 2025-09-20 10:51:10.840194 | orchestrator | 2025-09-20 10:51:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:51:13.877744 | orchestrator | 2025-09-20 
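The recap above closes the ovn-db play: the NB and SB databases run as a three-node raft cluster, the cluster leader is queried on every node, and only one node (testbed-node-0 here, the single "changed" host) applies the connection settings. A hedged playbook-style sketch of those two steps; the container name, control-socket path, port 6641 and the "Role: leader" match are assumptions based on common OVN defaults, not taken from this job:

- name: Get OVN_Northbound cluster leader
  ansible.builtin.command: >
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl
    cluster/status OVN_Northbound
  register: nb_status
  changed_when: false

- name: Configure OVN NB connection settings (leader only)
  ansible.builtin.command: >
    docker exec ovn_nb_db ovn-nbctl --inactivity-probe=60000
    set-connection ptcp:6641:0.0.0.0
  when: "'Role: leader' in nb_status.stdout"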
10:51:13 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:51:13.879748 | orchestrator | 2025-09-20 10:51:13 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:51:13.881562 | orchestrator | 2025-09-20 10:51:13 | INFO  | Wait 1 second(s) until the next check [the same two STARTED status lines and the one-second wait notice repeat about every three seconds from 10:51:16 through 10:53:30; the intermediate repetitions are omitted here] 2025-09-20 10:53:33.872540 | orchestrator | 2025-09-20 10:53:33 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state
STARTED 2025-09-20 10:53:33.874364 | orchestrator | 2025-09-20 10:53:33 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:53:33.874850 | orchestrator | 2025-09-20 10:53:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:53:36.915653 | orchestrator | 2025-09-20 10:53:36 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:53:36.917175 | orchestrator | 2025-09-20 10:53:36 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:53:36.917229 | orchestrator | 2025-09-20 10:53:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:53:39.956101 | orchestrator | 2025-09-20 10:53:39 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:53:39.956259 | orchestrator | 2025-09-20 10:53:39 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:53:39.956276 | orchestrator | 2025-09-20 10:53:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:53:42.996758 | orchestrator | 2025-09-20 10:53:42 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:53:42.998117 | orchestrator | 2025-09-20 10:53:42 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:53:42.998311 | orchestrator | 2025-09-20 10:53:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:53:46.038745 | orchestrator | 2025-09-20 10:53:46 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:53:46.038968 | orchestrator | 2025-09-20 10:53:46 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state STARTED 2025-09-20 10:53:46.039870 | orchestrator | 2025-09-20 10:53:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:53:49.092428 | orchestrator | 2025-09-20 10:53:49 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state STARTED 2025-09-20 10:53:49.101232 | orchestrator | 2025-09-20 10:53:49 | INFO  | Task 65ca3d61-f412-45f3-9ea1-23bb552ae72f is in state SUCCESS 2025-09-20 10:53:49.102654 | orchestrator | 2025-09-20 10:53:49.102729 | orchestrator | 2025-09-20 10:53:49.102743 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:53:49.102756 | orchestrator | 2025-09-20 10:53:49.102767 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:53:49.102779 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:00.477) 0:00:00.477 **** 2025-09-20 10:53:49.102791 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.102804 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.102816 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.102827 | orchestrator | 2025-09-20 10:53:49.102838 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:53:49.102849 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:00.474) 0:00:00.952 **** 2025-09-20 10:53:49.102860 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-20 10:53:49.102871 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-20 10:53:49.102882 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-20 10:53:49.102893 | orchestrator | 2025-09-20 10:53:49.102903 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-20 10:53:49.102914 | orchestrator 
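The long stretch of "is in state STARTED" lines condensed above is simply a watcher checking two deployment tasks every few seconds until one of them reports SUCCESS at 10:53:49. If you wanted the same wait-and-retry behaviour inside a play, Ansible's until/retries/delay keywords provide it; check_task_state.sh below is a hypothetical helper that prints a task's state, used only to illustrate the mechanics, and is not part of OSISM or this job:

- name: Wait until the deployment task reports SUCCESS
  ansible.builtin.command: >
    /usr/local/bin/check_task_state.sh
    {{ task_id | default('fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1') }}
  register: task_state
  until: task_state.stdout == "SUCCESS"
  retries: 60          # give up after 60 attempts ...
  delay: 3             # ... spaced three seconds apart, matching the log cadence
  changed_when: false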
| 2025-09-20 10:53:49.102925 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-20 10:53:49.102936 | orchestrator | Saturday 20 September 2025 10:47:43 +0000 (0:00:00.847) 0:00:01.799 **** 2025-09-20 10:53:49.103034 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.103047 | orchestrator | 2025-09-20 10:53:49.103058 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-20 10:53:49.103069 | orchestrator | Saturday 20 September 2025 10:47:44 +0000 (0:00:01.015) 0:00:02.814 **** 2025-09-20 10:53:49.103079 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.103090 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.103101 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.103112 | orchestrator | 2025-09-20 10:53:49.103123 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-20 10:53:49.103134 | orchestrator | Saturday 20 September 2025 10:47:44 +0000 (0:00:00.810) 0:00:03.624 **** 2025-09-20 10:53:49.103145 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.103155 | orchestrator | 2025-09-20 10:53:49.103166 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-20 10:53:49.103202 | orchestrator | Saturday 20 September 2025 10:47:45 +0000 (0:00:00.926) 0:00:04.551 **** 2025-09-20 10:53:49.103216 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.103539 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.103555 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.103568 | orchestrator | 2025-09-20 10:53:49.103578 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-20 10:53:49.103590 | orchestrator | Saturday 20 September 2025 10:47:47 +0000 (0:00:01.645) 0:00:06.196 **** 2025-09-20 10:53:49.103668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-20 10:53:49.103679 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-20 10:53:49.103690 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-20 10:53:49.103701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-20 10:53:49.103712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-20 10:53:49.103723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-20 10:53:49.103733 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-20 10:53:49.103745 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-20 10:53:49.103767 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-20 10:53:49.103779 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-20 10:53:49.103790 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-20 10:53:49.103801 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-20 10:53:49.103911 | orchestrator | 2025-09-20 10:53:49.103923 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-20 10:53:49.103934 | orchestrator | Saturday 20 September 2025 10:47:49 +0000 (0:00:02.376) 0:00:08.574 **** 2025-09-20 10:53:49.103945 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-20 10:53:49.103956 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-20 10:53:49.103983 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-20 10:53:49.103994 | orchestrator | 2025-09-20 10:53:49.104005 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-20 10:53:49.104016 | orchestrator | Saturday 20 September 2025 10:47:50 +0000 (0:00:01.065) 0:00:09.640 **** 2025-09-20 10:53:49.104027 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-20 10:53:49.104038 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-20 10:53:49.104049 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-20 10:53:49.104060 | orchestrator | 2025-09-20 10:53:49.104071 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-20 10:53:49.104082 | orchestrator | Saturday 20 September 2025 10:47:52 +0000 (0:00:01.320) 0:00:10.960 **** 2025-09-20 10:53:49.104093 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-20 10:53:49.104104 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.104127 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-20 10:53:49.104139 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.104150 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-20 10:53:49.104160 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.104507 | orchestrator | 2025-09-20 10:53:49.104530 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-20 10:53:49.104541 | orchestrator | Saturday 20 September 2025 10:47:53 +0000 (0:00:00.925) 0:00:11.886 **** 2025-09-20 10:53:49.104556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.104573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.104585 | 
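The sysctl and module-load steps above prepare the hosts for the loadbalancer containers: ip_nonlocal_bind lets haproxy and keepalived bind the virtual IP before it is actually assigned to the node, and ip_vs backs keepalived's virtual-server handling. A hedged sketch of the same three actions using the ansible.posix and community.general collections (module choice and drop-in file name are assumptions; kolla-ansible's own implementation may differ):

- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
    state: present
  loop:
    - { name: net.ipv6.ip_nonlocal_bind, value: "1" }
    - { name: net.ipv4.ip_nonlocal_bind, value: "1" }
    - { name: net.unix.max_dgram_qlen, value: "128" }

- name: Load modules
  community.general.modprobe:
    name: ip_vs
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    dest: /etc/modules-load.d/ip_vs.conf   # assumed drop-in file name
    content: "ip_vs\n"
    mode: "0644"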
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.104608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.104628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.104677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.104775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.104788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.104799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.104811 | orchestrator | 2025-09-20 10:53:49.104822 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-20 10:53:49.104910 | orchestrator | Saturday 20 September 2025 10:47:55 +0000 (0:00:01.919) 0:00:13.805 **** 2025-09-20 10:53:49.104924 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.104952 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.104964 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.104975 | orchestrator | 2025-09-20 10:53:49.104986 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-20 10:53:49.104997 | orchestrator | Saturday 20 September 2025 10:47:56 +0000 (0:00:01.419) 0:00:15.225 **** 2025-09-20 10:53:49.105008 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-20 10:53:49.105019 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-20 10:53:49.105030 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-20 10:53:49.105041 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-20 10:53:49.105052 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-20 10:53:49.105063 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-20 10:53:49.105074 | orchestrator | 2025-09-20 10:53:49.105085 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-20 10:53:49.105096 | orchestrator | Saturday 20 September 2025 10:47:58 +0000 (0:00:01.944) 0:00:17.170 **** 2025-09-20 10:53:49.105107 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.105118 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.105131 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.105142 | orchestrator | 2025-09-20 10:53:49.105155 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-20 10:53:49.105167 | orchestrator | Saturday 20 September 2025 10:48:00 +0000 (0:00:01.719) 0:00:18.889 **** 2025-09-20 10:53:49.105202 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.105215 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.105233 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.105252 | orchestrator | 2025-09-20 10:53:49.105271 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-20 10:53:49.105288 | orchestrator | Saturday 20 September 2025 10:48:03 +0000 (0:00:03.085) 0:00:21.975 **** 2025-09-20 10:53:49.105317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.105394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.105410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.105841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-20 10:53:49.105867 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.105879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.105892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.105904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.105920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.106011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-20 10:53:49.106093 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.106591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.106612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.106646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-20 10:53:49.106658 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.106667 | orchestrator | 2025-09-20 10:53:49.106677 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-20 10:53:49.106688 | orchestrator | Saturday 20 September 2025 10:48:03 +0000 (0:00:00.753) 0:00:22.729 **** 2025-09-20 10:53:49.106698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.106848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.106896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.106985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.107018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-20 10:53:49.107028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.107054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-20 10:53:49.107098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.107133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837', '__omit_place_holder__35d1280a3092cf0af59aec01a638a57dbbec8837'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-20 10:53:49.107173 | orchestrator | 2025-09-20 10:53:49.107206 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-20 10:53:49.107217 | orchestrator | Saturday 20 September 2025 10:48:06 +0000 (0:00:02.851) 0:00:25.581 **** 2025-09-20 10:53:49.107227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.107341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.107353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.107365 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.108052 | orchestrator | 2025-09-20 10:53:49.108345 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-20 10:53:49.108363 | orchestrator | Saturday 20 September 2025 10:48:09 +0000 (0:00:03.060) 0:00:28.641 **** 2025-09-20 10:53:49.108373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-20 10:53:49.108446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-20 10:53:49.108458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-20 10:53:49.108479 | orchestrator | 2025-09-20 10:53:49.108489 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-20 10:53:49.108499 | orchestrator | Saturday 20 September 2025 10:48:13 +0000 (0:00:03.815) 0:00:32.457 **** 2025-09-20 10:53:49.108509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-20 10:53:49.108520 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-20 10:53:49.108559 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-20 10:53:49.108570 | orchestrator | 2025-09-20 10:53:49.108659 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-20 10:53:49.108748 | orchestrator | Saturday 20 September 2025 10:48:18 +0000 (0:00:05.253) 0:00:37.710 **** 2025-09-20 10:53:49.108759 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.108770 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.108779 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.108789 | orchestrator | 2025-09-20 10:53:49.108799 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-20 10:53:49.108809 | orchestrator | Saturday 20 September 2025 10:48:20 +0000 (0:00:01.118) 0:00:38.828 **** 2025-09-20 10:53:49.108844 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-20 10:53:49.108855 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-20 10:53:49.108865 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-20 10:53:49.108874 | orchestrator | 2025-09-20 10:53:49.108884 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-20 10:53:49.108894 | orchestrator | Saturday 20 September 2025 10:48:22 +0000 (0:00:02.824) 0:00:41.653 **** 2025-09-20 10:53:49.108904 | orchestrator | changed: 
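
The "Copying over config.json files for services" task above (and the service-cert-copy tasks further down) loop over the same per-service map; each `item=` dump in the log is one entry of that map, and entries that do not apply to a given task, or whose `enabled` flag is false (haproxy-ssh throughout this run), show up as "skipping". The timing line under each TASK header ("Saturday 20 September 2025 ... (0:00:02.851) 0:00:25.581") comes from the profile_tasks callback: current time, duration of the previous task in parentheses, then cumulative playbook time. Below is a minimal YAML sketch of such a map with field values copied from the logged items; the top-level variable name follows kolla-ansible's usual `<role>_services` convention and is an assumption, as is the exact task wiring.

    # Sketch of the per-service map these tasks iterate over (values taken from
    # the logged items; the top-level name is assumed, not shown in this log).
    loadbalancer_services:
      haproxy:
        container_name: haproxy
        group: loadbalancer
        enabled: true
        image: registry.osism.tech/kolla/haproxy:2024.2
        privileged: true
        volumes:
          - "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "haproxy_socket:/var/lib/kolla/haproxy/"
          - "letsencrypt_certificates:/etc/haproxy/certificates"
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
          timeout: "30"
      haproxy-ssh:
        container_name: haproxy_ssh
        group: loadbalancer
        enabled: false   # disabled here, so it is skipped throughout this role
        image: registry.osism.tech/kolla/haproxy-ssh:2024.2

A condition along the lines of `when: item.value.enabled | bool` (combined with task-specific conditions) is what produces the "skipping" results seen for the disabled entries on all three nodes.
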
[testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-20 10:53:49.108914 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-20 10:53:49.110087 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-20 10:53:49.110117 | orchestrator | 2025-09-20 10:53:49.110133 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-20 10:53:49.110148 | orchestrator | Saturday 20 September 2025 10:48:25 +0000 (0:00:02.979) 0:00:44.632 **** 2025-09-20 10:53:49.110163 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-20 10:53:49.110217 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-20 10:53:49.110298 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-20 10:53:49.110316 | orchestrator | 2025-09-20 10:53:49.110326 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-20 10:53:49.110336 | orchestrator | Saturday 20 September 2025 10:48:27 +0000 (0:00:01.917) 0:00:46.550 **** 2025-09-20 10:53:49.110346 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-20 10:53:49.110356 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-20 10:53:49.110365 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-20 10:53:49.110375 | orchestrator | 2025-09-20 10:53:49.110385 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-20 10:53:49.110395 | orchestrator | Saturday 20 September 2025 10:48:29 +0000 (0:00:01.630) 0:00:48.180 **** 2025-09-20 10:53:49.110404 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.110427 | orchestrator | 2025-09-20 10:53:49.111062 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-20 10:53:49.111091 | orchestrator | Saturday 20 September 2025 10:48:30 +0000 (0:00:00.628) 0:00:48.809 **** 2025-09-20 10:53:49.111109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.111135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.111713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.111743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.111753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.111763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.111787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.111797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.111815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.111826 | orchestrator | 2025-09-20 10:53:49.111836 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-20 10:53:49.111846 | orchestrator | Saturday 20 September 2025 10:48:33 +0000 (0:00:03.806) 0:00:52.616 **** 2025-09-20 10:53:49.112059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.112088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.112155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.112173 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.112263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
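
The certificate handling above covers three kinds of material: haproxy.pem and haproxy-internal.pem (the external and internal frontend TLS bundles) are copied and report "changed", the "extra CA certificates" task places additional CA bundles alongside each service's config, and the "backend internal TLS certificate/key" tasks are skipped for every service on every node, which is consistent with backend TLS being disabled in this testbed. The sketch below lists the kolla-ansible globals that usually govern these paths; the flag names are kolla-ansible's, but the values are inferred from which tasks ran versus skipped in this log and are not shown in the log itself.

    # Inferred TLS-related settings (globals.yml style); values are an
    # interpretation of this log, not copied from the deployment's config.
    kolla_enable_tls_external: "yes"        # haproxy.pem copied
    kolla_enable_tls_internal: "yes"        # haproxy-internal.pem copied
    kolla_copy_ca_into_containers: "yes"    # "extra CA certificates" tasks changed
    kolla_enable_tls_backend: "no"          # all "backend internal TLS ..." tasks skipped
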
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.112286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.112303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.112313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.112323 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.112411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.112426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.112436 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.112446 | orchestrator | 2025-09-20 10:53:49.112456 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over 
backend internal TLS key] *** 2025-09-20 10:53:49.112903 | orchestrator | Saturday 20 September 2025 10:48:34 +0000 (0:00:00.735) 0:00:53.351 **** 2025-09-20 10:53:49.112928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.112951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.112962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.112972 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.112989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.113138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.113156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.113166 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.113243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.113268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.113278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.113288 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.113298 | orchestrator | 2025-09-20 10:53:49.113308 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-20 10:53:49.113318 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:00.791) 0:00:54.143 **** 2025-09-20 10:53:49.113334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.113418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.113433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.113444 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.113454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.113471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.113482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.113491 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.113499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.113523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114348 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.114361 | orchestrator | 2025-09-20 10:53:49.114372 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-20 10:53:49.114382 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:00.603) 0:00:54.747 **** 2025-09-20 10:53:49.114391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114448 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.114458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114498 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.114529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114555 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114577 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.114605 | orchestrator | 2025-09-20 10:53:49.114615 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-20 10:53:49.114624 | orchestrator | Saturday 20 September 2025 10:48:36 +0000 (0:00:00.571) 0:00:55.319 **** 2025-09-20 10:53:49.114642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114675 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.114692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114725 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.114734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114761 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.114770 | orchestrator | 2025-09-20 10:53:49.114779 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-20 10:53:49.114788 | orchestrator | Saturday 20 September 2025 10:48:37 +0000 (0:00:00.750) 0:00:56.069 **** 2025-09-20 10:53:49.114801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
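
The service-cert-copy role runs here under three project names (loadbalancer, mariadb, and the proxysql block that follows), yet each run iterates the same loadbalancer service items, presumably so that backend TLS material for the services the load balancer fronts can be dropped into the haproxy/proxysql containers when the relevant TLS options are enabled; in this run every item is skipped. The sketch below shows the general shape of such a loop. It is illustrative only, not kolla-ansible's literal task file, and the certificate paths are hypothetical.

    # Illustrative shape of the skipped cert-copy tasks (not the real task file).
    - name: "{{ project_name }} | Copying over backend internal TLS certificate"
      copy:
        src: "{{ kolla_certificates_dir }}/{{ project_name }}-cert.pem"   # hypothetical path
        dest: "{{ node_config_directory }}/{{ item.key }}/{{ project_name }}-cert.pem"
        mode: "0600"
      loop: "{{ project_services | dict2items }}"
      when:
        - item.value.enabled | bool
        - kolla_enable_tls_backend | bool   # not enabled here, so every item skips
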
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114841 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.114850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114877 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.114890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114929 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.114938 | orchestrator | 2025-09-20 10:53:49.114947 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-20 10:53:49.114956 | orchestrator | Saturday 20 September 2025 10:48:38 +0000 (0:00:00.718) 0:00:56.787 **** 2025-09-20 10:53:49.114965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.114974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.114983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.114992 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.115001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.115015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.115036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.115045 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.115054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.115063 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.115072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.115081 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.115090 | orchestrator | 2025-09-20 10:53:49.115099 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-20 10:53:49.115108 | orchestrator | Saturday 20 September 2025 10:48:38 +0000 (0:00:00.552) 0:00:57.339 **** 2025-09-20 10:53:49.115117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.115139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.115149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.115158 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.115172 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.115207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.115217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.115226 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.115235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 10:53:49.115244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 10:53:49.115268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 10:53:49.115278 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.115287 | orchestrator | 2025-09-20 10:53:49.115296 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-20 10:53:49.115305 | orchestrator | Saturday 20 September 2025 10:48:39 +0000 (0:00:01.047) 0:00:58.387 **** 2025-09-20 10:53:49.115313 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-20 10:53:49.115323 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-20 10:53:49.115337 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-20 10:53:49.115346 | orchestrator | 2025-09-20 10:53:49.115355 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-20 10:53:49.115364 | orchestrator | Saturday 20 September 2025 10:48:41 +0000 (0:00:02.183) 0:01:00.571 **** 2025-09-20 10:53:49.115372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-20 10:53:49.115381 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-20 10:53:49.115390 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-20 10:53:49.115399 | orchestrator | 2025-09-20 10:53:49.115407 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-20 10:53:49.115416 | orchestrator | Saturday 20 September 2025 10:48:44 +0000 (0:00:02.839) 0:01:03.410 **** 2025-09-20 10:53:49.115425 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 10:53:49.115434 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 10:53:49.115442 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.115451 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 10:53:49.115460 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 10:53:49.115468 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 10:53:49.115477 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.115486 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 10:53:49.115494 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.115503 | orchestrator | 2025-09-20 10:53:49.115512 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-20 10:53:49.115521 | orchestrator | Saturday 20 September 2025 10:48:46 +0000 (0:00:01.840) 0:01:05.250 **** 2025-09-20 10:53:49.115529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.115544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.115558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.115574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-20 10:53:49.115583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.115593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.115602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 10:53:49.115616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.115626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 10:53:49.115635 | orchestrator | 2025-09-20 10:53:49.115644 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-20 10:53:49.115652 | orchestrator | Saturday 20 September 2025 10:48:50 +0000 (0:00:04.009) 0:01:09.260 **** 2025-09-20 10:53:49.115661 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.115670 | orchestrator | 2025-09-20 10:53:49.115679 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-20 10:53:49.115688 | orchestrator | Saturday 20 September 2025 10:48:50 +0000 (0:00:00.495) 0:01:09.756 **** 2025-09-20 10:53:49.115697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-20 10:53:49.115713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.115774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-20 10:53:49.115816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.115830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-20 10:53:49.115866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.115880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115899 | orchestrator | 2025-09-20 10:53:49.115908 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-20 10:53:49.115917 | orchestrator | Saturday 20 September 2025 10:48:56 +0000 (0:00:05.030) 0:01:14.786 **** 2025-09-20 10:53:49.115930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-20 10:53:49.115946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-20 10:53:49.115955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.115969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.115978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.115997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116010 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.116019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116028 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.116044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-20 10:53:49.116059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.116068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116086 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.116095 | orchestrator | 2025-09-20 10:53:49.116104 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-20 10:53:49.116113 | orchestrator | Saturday 20 September 2025 10:48:57 +0000 (0:00:01.409) 0:01:16.196 **** 2025-09-20 10:53:49.116122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-20 10:53:49.116133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-20 10:53:49.116142 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.116151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-20 10:53:49.116164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-20 10:53:49.116173 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.116199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-20 10:53:49.116208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-20 10:53:49.116218 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.116227 | 
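
The skip/changed pattern in the aodh tasks above follows directly from the service dictionaries being looped over: only the aodh-api entry carries a 'haproxy' sub-dict, so it is the only item for which an HAProxy frontend/backend gets rendered, while aodh-evaluator, aodh-listener and aodh-notifier are skipped; the firewall task is skipped on all nodes, presumably because firewall management is not enabled in this testbed. A minimal, simplified sketch of that loop pattern (not the actual kolla-ansible haproxy-config task; the template name and destination path below are assumptions for illustration):

  - name: Copying over {{ project_name }} haproxy config   # simplified sketch, not the real task
    ansible.builtin.template:
      src: haproxy_single_service_split.cfg.j2              # assumed template name
      dest: /etc/kolla/haproxy/services.d/{{ item.key }}.cfg
    with_dict: "{{ project_services }}"                     # yields the item={'key': ..., 'value': ...} shape seen in the log
    when:
      - item.value.enabled | bool
      - item.value.haproxy is defined                       # true for aodh-api only
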
orchestrator | 2025-09-20 10:53:49.116241 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-20 10:53:49.116251 | orchestrator | Saturday 20 September 2025 10:48:59 +0000 (0:00:02.318) 0:01:18.514 **** 2025-09-20 10:53:49.116263 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.116272 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.116281 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.116290 | orchestrator | 2025-09-20 10:53:49.116299 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-20 10:53:49.116308 | orchestrator | Saturday 20 September 2025 10:49:01 +0000 (0:00:01.566) 0:01:20.080 **** 2025-09-20 10:53:49.116317 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.116326 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.116335 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.116343 | orchestrator | 2025-09-20 10:53:49.116352 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-20 10:53:49.116361 | orchestrator | Saturday 20 September 2025 10:49:04 +0000 (0:00:02.931) 0:01:23.012 **** 2025-09-20 10:53:49.116370 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.116379 | orchestrator | 2025-09-20 10:53:49.116388 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-20 10:53:49.116396 | orchestrator | Saturday 20 September 2025 10:49:05 +0000 (0:00:00.811) 0:01:23.824 **** 2025-09-20 10:53:49.116406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.116416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.116459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.116488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116506 | orchestrator | 2025-09-20 10:53:49.116515 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-20 10:53:49.116533 | orchestrator | Saturday 20 September 2025 10:49:08 +0000 (0:00:03.463) 0:01:27.288 **** 2025-09-20 10:53:49.116548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.116558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116576 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.116586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.116595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116626 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.116641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.116651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.116669 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.116678 | orchestrator | 2025-09-20 10:53:49.116687 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-20 10:53:49.116697 | orchestrator | Saturday 20 September 2025 10:49:09 +0000 (0:00:00.759) 0:01:28.047 **** 2025-09-20 10:53:49.116706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-20 10:53:49.116716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-20 10:53:49.116725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-20 10:53:49.116734 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.116749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-20 10:53:49.116758 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.116767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-20 10:53:49.116780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-20 10:53:49.116789 | 
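
Re-expressed as YAML, the barbican-api entry that produced the 'changed' results above (values taken verbatim from the logged item; volumes and dimensions omitted, and the healthcheck address is the per-node API address 192.168.16.10/11/12) looks roughly like this in the role's service definition:

  barbican-api:
    container_name: barbican_api
    group: barbican-api
    enabled: true
    image: registry.osism.tech/kolla/barbican-api:2024.2
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]
      timeout: "30"
    haproxy:
      barbican_api:
        enabled: "yes"
        mode: http
        external: false
        port: "9311"
        listen_port: "9311"
        tls_backend: "no"
      barbican_api_external:
        enabled: "yes"
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9311"
        listen_port: "9311"
        tls_backend: "no"

The barbican-keystone-listener and barbican-worker entries carry no haproxy key, which is why the haproxy-config tasks skip them.
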
orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.116798 | orchestrator | 2025-09-20 10:53:49.116807 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-20 10:53:49.116816 | orchestrator | Saturday 20 September 2025 10:49:10 +0000 (0:00:01.046) 0:01:29.094 **** 2025-09-20 10:53:49.116825 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.116834 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.116843 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.116851 | orchestrator | 2025-09-20 10:53:49.116860 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-20 10:53:49.116869 | orchestrator | Saturday 20 September 2025 10:49:11 +0000 (0:00:01.435) 0:01:30.529 **** 2025-09-20 10:53:49.116878 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.116887 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.116896 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.116905 | orchestrator | 2025-09-20 10:53:49.118132 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-20 10:53:49.118215 | orchestrator | Saturday 20 September 2025 10:49:13 +0000 (0:00:02.081) 0:01:32.611 **** 2025-09-20 10:53:49.118251 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.118267 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.118281 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.118296 | orchestrator | 2025-09-20 10:53:49.118312 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-20 10:53:49.118357 | orchestrator | Saturday 20 September 2025 10:49:14 +0000 (0:00:00.304) 0:01:32.915 **** 2025-09-20 10:53:49.118372 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.118385 | orchestrator | 2025-09-20 10:53:49.118425 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-20 10:53:49.118441 | orchestrator | Saturday 20 September 2025 10:49:14 +0000 (0:00:00.755) 0:01:33.671 **** 2025-09-20 10:53:49.118458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-20 10:53:49.118476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 
rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-20 10:53:49.118509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-20 10:53:49.118524 | orchestrator | 2025-09-20 10:53:49.118539 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-20 10:53:49.118548 | orchestrator | Saturday 20 September 2025 10:49:17 +0000 (0:00:02.466) 0:01:36.137 **** 2025-09-20 10:53:49.118583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-20 10:53:49.118593 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.118602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-20 10:53:49.118611 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.118621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 
'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-20 10:53:49.118635 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.118644 | orchestrator | 2025-09-20 10:53:49.118653 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-20 10:53:49.118662 | orchestrator | Saturday 20 September 2025 10:49:18 +0000 (0:00:01.257) 0:01:37.395 **** 2025-09-20 10:53:49.118672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-20 10:53:49.118682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-20 10:53:49.118693 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.118702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-20 10:53:49.118715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-20 10:53:49.118724 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.118740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-20 10:53:49.118750 | orchestrator 
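
The ceph-rgw entry differs from the API services above: it has no container or image of its own because the Rados Gateway daemons are deployed outside of this kolla-ansible run (on testbed-node-3, -4 and -5, as the member list shows), so the HAProxy backend members are listed explicitly via custom_member_list instead of being derived from an inventory group. Each string looks like a literal haproxy 'server' line and is presumably rendered verbatim into the generated backend, exposing the RGWs on the internal and external VIP on port 6780 while health-checking them on port 8081. Reformatted as YAML purely for readability of the values already logged:

  ceph-rgw:
    group: all
    enabled: true
    haproxy:
      radosgw:
        enabled: true
        mode: http
        external: false
        port: "6780"
        custom_member_list:
          - server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
          - server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5
          - server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5
      radosgw_external:
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "6780"
        custom_member_list:
          - server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
          - server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5
          - server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5
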
| skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-20 10:53:49.118759 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.118768 | orchestrator | 2025-09-20 10:53:49.118777 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-20 10:53:49.118786 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:01.848) 0:01:39.244 **** 2025-09-20 10:53:49.118794 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.118803 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.118812 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.118820 | orchestrator | 2025-09-20 10:53:49.118829 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-20 10:53:49.118838 | orchestrator | Saturday 20 September 2025 10:49:21 +0000 (0:00:00.793) 0:01:40.038 **** 2025-09-20 10:53:49.118852 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.118861 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.118869 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.118878 | orchestrator | 2025-09-20 10:53:49.118887 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-20 10:53:49.118896 | orchestrator | Saturday 20 September 2025 10:49:22 +0000 (0:00:01.255) 0:01:41.293 **** 2025-09-20 10:53:49.118904 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.118913 | orchestrator | 2025-09-20 10:53:49.118941 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-20 10:53:49.118959 | orchestrator | Saturday 20 September 2025 10:49:23 +0000 (0:00:00.766) 0:01:42.060 **** 2025-09-20 10:53:49.118969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.118979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.118992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.119078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.119125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119158 | orchestrator | 2025-09-20 10:53:49.119167 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-20 10:53:49.119206 | orchestrator | Saturday 20 September 2025 10:49:27 +0000 (0:00:04.349) 0:01:46.409 **** 2025-09-20 10:53:49.119223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.119233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.119264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119291 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.119304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.119319 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119374 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.119387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 10:53:49.119396 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:53:49.119405 | orchestrator |
2025-09-20 10:53:49.119414 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-20 10:53:49.119423 | orchestrator | Saturday 20 September 2025 10:49:28 +0000 (0:00:00.915) 0:01:47.325 ****
2025-09-20 10:53:49.119442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 10:53:49.119456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 10:53:49.119466 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:53:49.119475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 10:53:49.119484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 10:53:49.119493 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:53:49.119501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 10:53:49.119510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 10:53:49.119519 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:53:49.119528 | orchestrator |
2025-09-20 10:53:49.119537 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-20 10:53:49.119546 | orchestrator | Saturday 20 September 2025 10:49:29 +0000 (0:00:00.828) 0:01:48.154 ****
2025-09-20 10:53:49.119554 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:53:49.119563 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:53:49.119572 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:53:49.119580 | orchestrator |
2025-09-20 10:53:49.119589 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-20 10:53:49.119598 | orchestrator | Saturday 20 September 2025 10:49:30 +0000 (0:00:01.315) 0:01:49.469 ****
2025-09-20 10:53:49.119607 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:53:49.119615 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:53:49.119624 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:53:49.119633 | orchestrator |
2025-09-20 10:53:49.119642 | orchestrator | TASK [include_role : cloudkitty]
*********************************************** 2025-09-20 10:53:49.119651 | orchestrator | Saturday 20 September 2025 10:49:32 +0000 (0:00:01.906) 0:01:51.376 **** 2025-09-20 10:53:49.119659 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.119668 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.119677 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.119685 | orchestrator | 2025-09-20 10:53:49.119694 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-20 10:53:49.119703 | orchestrator | Saturday 20 September 2025 10:49:33 +0000 (0:00:00.417) 0:01:51.793 **** 2025-09-20 10:53:49.119712 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.119720 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.119729 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.119737 | orchestrator | 2025-09-20 10:53:49.119746 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-20 10:53:49.119755 | orchestrator | Saturday 20 September 2025 10:49:33 +0000 (0:00:00.270) 0:01:52.064 **** 2025-09-20 10:53:49.119763 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.119772 | orchestrator | 2025-09-20 10:53:49.119781 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-20 10:53:49.119790 | orchestrator | Saturday 20 September 2025 10:49:34 +0000 (0:00:00.707) 0:01:52.772 **** 2025-09-20 10:53:49.119807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 10:53:49.119822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 10:53:49.119831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 10:53:49.119850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 10:53:49.119873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 10:53:49.119977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 10:53:49.119987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.119996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120039 | orchestrator | 2025-09-20 10:53:49.120048 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-20 10:53:49.120060 | orchestrator | Saturday 20 September 2025 10:49:37 +0000 (0:00:03.422) 0:01:56.194 **** 2025-09-20 10:53:49.120082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 10:53:49.120097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 10:53:49.120112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 10:53:49.120266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120280 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.120295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 10:53:49.120321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120455 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.120464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 10:53:49.120480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 10:53:49.120489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-09-20 10:53:49.120511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.120544 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.120553 | orchestrator | 2025-09-20 10:53:49.120562 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-20 10:53:49.120571 | orchestrator | Saturday 20 September 2025 10:49:38 +0000 (0:00:00.803) 0:01:56.998 **** 2025-09-20 10:53:49.120585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-20 10:53:49.120594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-20 10:53:49.120604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-20 10:53:49.120613 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.120622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-20 10:53:49.120631 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.120639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-20 
10:53:49.120648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-20 10:53:49.120657 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:53:49.120666 | orchestrator |
2025-09-20 10:53:49.120675 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-20 10:53:49.120684 | orchestrator | Saturday 20 September 2025 10:49:39 +0000 (0:00:01.200) 0:01:58.199 ****
2025-09-20 10:53:49.120693 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:53:49.120701 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:53:49.120710 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:53:49.120718 | orchestrator |
2025-09-20 10:53:49.120727 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-20 10:53:49.120735 | orchestrator | Saturday 20 September 2025 10:49:40 +0000 (0:00:01.246) 0:01:59.445 ****
2025-09-20 10:53:49.120744 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:53:49.120752 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:53:49.120761 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:53:49.120770 | orchestrator |
2025-09-20 10:53:49.120778 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-20 10:53:49.120787 | orchestrator | Saturday 20 September 2025 10:49:42 +0000 (0:00:01.906) 0:02:01.351 ****
2025-09-20 10:53:49.120796 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:53:49.120804 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:53:49.120813 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:53:49.120821 | orchestrator |
2025-09-20 10:53:49.120830 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-20 10:53:49.120842 | orchestrator | Saturday 20 September 2025 10:49:43 +0000 (0:00:00.424) 0:02:01.775 ****
2025-09-20 10:53:49.120851 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:53:49.120860 | orchestrator |
2025-09-20 10:53:49.120869 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-09-20 10:53:49.120877 | orchestrator | Saturday 20 September 2025 10:49:43 +0000 (0:00:00.726) 0:02:02.502 ****
2025-09-20 10:53:49.120895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 10:53:49.120912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.120931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 10:53:49.120947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.122169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 10:53:49.122295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.122316 | orchestrator | 2025-09-20 10:53:49.122331 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-20 10:53:49.122343 | orchestrator | Saturday 20 September 2025 10:49:47 +0000 (0:00:03.754) 0:02:06.257 **** 2025-09-20 10:53:49.122379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 10:53:49.122403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.122416 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.122434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 10:53:49.122457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.122477 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.122489 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 10:53:49.122515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.122534 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.122546 | orchestrator | 2025-09-20 10:53:49.122557 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-20 10:53:49.122569 | orchestrator | Saturday 20 September 2025 10:49:50 +0000 (0:00:03.271) 0:02:09.528 **** 2025-09-20 10:53:49.122581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 10:53:49.122594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 10:53:49.122606 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.122618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 10:53:49.122629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 10:53:49.122641 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.122657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 10:53:49.122685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 10:53:49.122697 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.122708 | orchestrator | 2025-09-20 10:53:49.122720 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-20 10:53:49.122731 | orchestrator | Saturday 20 September 2025 10:49:53 +0000 (0:00:03.196) 0:02:12.725 **** 2025-09-20 10:53:49.122742 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.122753 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.122764 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.122775 | orchestrator | 2025-09-20 10:53:49.122786 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-20 10:53:49.122798 | orchestrator | Saturday 20 September 2025 10:49:55 +0000 (0:00:01.377) 0:02:14.102 **** 2025-09-20 10:53:49.122809 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.122819 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.122830 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.122841 | orchestrator | 2025-09-20 10:53:49.122852 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-20 10:53:49.122863 | orchestrator | Saturday 20 September 2025 10:49:57 +0000 (0:00:02.153) 0:02:16.256 **** 2025-09-20 10:53:49.122874 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.122885 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.122896 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.122907 | orchestrator | 2025-09-20 10:53:49.122918 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-20 10:53:49.122929 | orchestrator | Saturday 20 September 2025 10:49:58 +0000 (0:00:00.519) 0:02:16.775 **** 2025-09-20 10:53:49.122940 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.122951 | orchestrator | 2025-09-20 10:53:49.122962 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-20 10:53:49.122974 | orchestrator | Saturday 20 September 2025 10:49:58 +0000 (0:00:00.823) 0:02:17.599 **** 2025-09-20 10:53:49.122985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:53:49.122998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:53:49.123022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:53:49.123034 | orchestrator | 2025-09-20 10:53:49.123045 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-20 10:53:49.123057 | orchestrator | Saturday 20 September 2025 10:50:02 +0000 (0:00:03.231) 0:02:20.831 **** 2025-09-20 10:53:49.123076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:53:49.123088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:53:49.123100 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.123111 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.123122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:53:49.123134 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.123145 | orchestrator | 2025-09-20 10:53:49.123156 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-20 10:53:49.123167 | orchestrator | Saturday 20 September 2025 10:50:02 +0000 (0:00:00.634) 0:02:21.465 **** 2025-09-20 10:53:49.123195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-20 10:53:49.123207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-20 10:53:49.123226 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.123238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-20 10:53:49.123249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-20 10:53:49.123260 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.123271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-20 10:53:49.123282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-20 10:53:49.123293 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.123304 | orchestrator | 2025-09-20 10:53:49.123316 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-20 10:53:49.123327 | orchestrator | Saturday 20 September 2025 10:50:03 +0000 (0:00:00.699) 0:02:22.165 **** 2025-09-20 10:53:49.123342 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.123354 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.123365 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.123376 | orchestrator | 2025-09-20 10:53:49.123387 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-20 10:53:49.123398 | orchestrator | Saturday 20 September 2025 10:50:04 +0000 (0:00:01.319) 0:02:23.485 **** 2025-09-20 10:53:49.123409 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.123419 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.123430 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.123441 | orchestrator | 2025-09-20 10:53:49.123452 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-20 10:53:49.123464 | orchestrator | Saturday 20 September 2025 10:50:06 +0000 (0:00:02.088) 0:02:25.574 **** 2025-09-20 10:53:49.123475 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.123486 | orchestrator | skipping: 
[testbed-node-1] 2025-09-20 10:53:49.123516 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.123528 | orchestrator | 2025-09-20 10:53:49.123539 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-20 10:53:49.123550 | orchestrator | Saturday 20 September 2025 10:50:07 +0000 (0:00:00.529) 0:02:26.103 **** 2025-09-20 10:53:49.123561 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.123572 | orchestrator | 2025-09-20 10:53:49.123584 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-20 10:53:49.123595 | orchestrator | Saturday 20 September 2025 10:50:08 +0000 (0:00:00.910) 0:02:27.014 **** 2025-09-20 10:53:49.123607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:53:49.123647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:53:49.123661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:53:49.123680 | orchestrator | 2025-09-20 10:53:49.123692 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-20 10:53:49.123703 | orchestrator | Saturday 20 September 2025 10:50:11 +0000 (0:00:03.498) 0:02:30.513 **** 2025-09-20 10:53:49.123728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:53:49.123748 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.123761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:53:49.123773 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.123798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:53:49.123817 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.123828 | orchestrator | 2025-09-20 10:53:49.123840 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-20 10:53:49.123851 | orchestrator | Saturday 20 September 2025 10:50:12 +0000 (0:00:00.943) 0:02:31.457 **** 2025-09-20 10:53:49.123863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 10:53:49.123875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 10:53:49.123887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 10:53:49.123900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 10:53:49.123911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 10:53:49.123923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-20 10:53:49.123934 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.123950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 10:53:49.123962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 10:53:49.123980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 10:53:49.123992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-20 10:53:49.124009 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.124020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 10:53:49.124031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 10:53:49.124043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 10:53:49.124054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 10:53:49.124065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-20 10:53:49.124076 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.124088 | orchestrator | 2025-09-20 10:53:49.124099 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-20 10:53:49.124110 | orchestrator | Saturday 20 September 2025 10:50:13 +0000 (0:00:00.870) 0:02:32.327 **** 2025-09-20 10:53:49.124121 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.124132 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.124144 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.124155 | orchestrator | 2025-09-20 10:53:49.124166 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-20 10:53:49.124207 | orchestrator | Saturday 20 September 2025 10:50:14 +0000 (0:00:01.267) 0:02:33.594 **** 2025-09-20 10:53:49.124219 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.124230 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.124241 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.124252 | orchestrator | 2025-09-20 10:53:49.124263 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-20 10:53:49.124274 | orchestrator | Saturday 20 September 2025 10:50:16 +0000 (0:00:01.822) 0:02:35.417 **** 2025-09-20 10:53:49.124285 | orchestrator | skipping: [testbed-node-0] 
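The service definitions echoed by the haproxy-config tasks above (designate, glance, grafana, horizon, keystone) are plain dict structures whose optional 'haproxy' sub-dict describes the internal and external frontends to be rendered. As a rough illustration only — not part of the job output — the following Python sketch summarises which HAProxy entries one such definition enables; the trimmed 'glance_api' dict mirrors the item printed by the glance task, while the enabled_frontends() helper is a name introduced here purely for illustration and does not come from kolla-ansible.

# Hypothetical sketch only -- not produced by this job. The dict below is a trimmed copy of
# the 'glance-api' item shown in "haproxy-config : Copying over glance haproxy config".
service = {
    'container_name': 'glance_api',
    'enabled': True,
    'haproxy': {
        'glance_api': {
            'enabled': True, 'mode': 'http', 'external': False, 'port': '9292',
            'custom_member_list': [
                'server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5',
                'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5',
                'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5',
            ],
        },
        'glance_api_external': {
            'enabled': True, 'mode': 'http', 'external': True,
            'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292',
        },
    },
}

def enabled_frontends(svc):
    """Yield (name, scope, port, member_count) for each enabled HAProxy entry of a service dict."""
    for name, cfg in svc.get('haproxy', {}).items():
        # The log shows both True and 'yes' used as enabled flags, so accept either here.
        if cfg.get('enabled') in (True, 'yes'):
            scope = 'external' if cfg.get('external') else 'internal'
            yield name, scope, cfg.get('port'), len(cfg.get('custom_member_list', []))

for entry in enabled_frontends(service):
    print(entry)
# -> ('glance_api', 'internal', '9292', 3)
# -> ('glance_api_external', 'external', '9292', 0)

Read against the log, entries whose flag is False or 'no' (for example designate-sink or glance-tls-proxy) are exactly the items reported as "skipping" by these tasks.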
2025-09-20 10:53:49.124296 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.124307 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.124330 | orchestrator | 2025-09-20 10:53:49.124351 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-20 10:53:49.124363 | orchestrator | Saturday 20 September 2025 10:50:16 +0000 (0:00:00.270) 0:02:35.687 **** 2025-09-20 10:53:49.124374 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.124385 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.124396 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.124407 | orchestrator | 2025-09-20 10:53:49.124418 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-20 10:53:49.124429 | orchestrator | Saturday 20 September 2025 10:50:17 +0000 (0:00:00.417) 0:02:36.105 **** 2025-09-20 10:53:49.124440 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.124451 | orchestrator | 2025-09-20 10:53:49.124467 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-20 10:53:49.124478 | orchestrator | Saturday 20 September 2025 10:50:18 +0000 (0:00:00.868) 0:02:36.973 **** 2025-09-20 10:53:49.124505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:53:49.124519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:53:49.124531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:53:49.124543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:53:49.124556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:53:49.124579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:53:49.124598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:53:49.124611 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:53:49.124623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:53:49.124634 | orchestrator | 2025-09-20 10:53:49.124646 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-20 10:53:49.124657 | orchestrator | Saturday 20 September 2025 10:50:21 +0000 (0:00:03.205) 0:02:40.179 **** 2025-09-20 10:53:49.124668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:53:49.124691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:53:49.124710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:53:49.124721 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.124733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:53:49.124746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:53:49.124757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:53:49.124769 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.124789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:53:49.124861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:53:49.124874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:53:49.124886 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.124897 | orchestrator | 2025-09-20 10:53:49.124908 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-20 10:53:49.124920 | orchestrator | Saturday 20 September 2025 10:50:22 +0000 (0:00:00.709) 0:02:40.889 **** 2025-09-20 10:53:49.124931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-20 10:53:49.124943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-20 10:53:49.124955 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.124966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-20 10:53:49.124978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-20 10:53:49.124990 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.125001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-20 10:53:49.125013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-20 10:53:49.125030 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.125041 | orchestrator | 2025-09-20 10:53:49.125053 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-20 10:53:49.125064 | orchestrator | Saturday 20 September 2025 10:50:22 +0000 (0:00:00.837) 0:02:41.726 **** 2025-09-20 10:53:49.125074 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.125086 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.125097 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.125107 | orchestrator | 2025-09-20 10:53:49.125119 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-20 10:53:49.125129 | orchestrator | Saturday 20 September 2025 10:50:24 +0000 (0:00:01.220) 0:02:42.947 **** 2025-09-20 10:53:49.125140 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.125151 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.125162 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.125173 | orchestrator | 2025-09-20 10:53:49.125237 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-20 10:53:49.125253 | orchestrator | Saturday 20 September 2025 10:50:26 +0000 (0:00:01.886) 0:02:44.834 **** 2025-09-20 10:53:49.125264 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.125275 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.125286 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.125297 | orchestrator | 2025-09-20 10:53:49.125309 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-20 10:53:49.125320 | orchestrator | Saturday 20 September 2025 10:50:26 +0000 (0:00:00.406) 0:02:45.240 **** 2025-09-20 10:53:49.125331 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.125342 | orchestrator | 2025-09-20 10:53:49.125353 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-20 10:53:49.125364 | orchestrator | Saturday 20 September 2025 10:50:27 +0000 (0:00:00.873) 0:02:46.113 **** 2025-09-20 10:53:49.125383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-09-20 10:53:49.125396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:53:49.125428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:53:49.125475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125486 | orchestrator | 2025-09-20 10:53:49.125497 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-20 10:53:49.125509 | orchestrator | Saturday 20 September 2025 10:50:30 +0000 (0:00:03.600) 0:02:49.714 **** 2025-09-20 10:53:49.125520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:53:49.125543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125554 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.125570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:53:49.125588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125600 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.125612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:53:49.125623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125641 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.125652 | orchestrator | 2025-09-20 10:53:49.125663 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-20 10:53:49.125674 | orchestrator | Saturday 20 September 2025 10:50:31 +0000 (0:00:00.988) 0:02:50.703 **** 2025-09-20 10:53:49.125685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-20 10:53:49.125697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-20 10:53:49.125708 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
10:53:49.125719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-20 10:53:49.125730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-20 10:53:49.125742 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.125752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-20 10:53:49.125761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-20 10:53:49.125771 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.125781 | orchestrator | 2025-09-20 10:53:49.125791 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-20 10:53:49.125800 | orchestrator | Saturday 20 September 2025 10:50:32 +0000 (0:00:00.953) 0:02:51.656 **** 2025-09-20 10:53:49.125810 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.125820 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.125829 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.125839 | orchestrator | 2025-09-20 10:53:49.125849 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-20 10:53:49.125858 | orchestrator | Saturday 20 September 2025 10:50:34 +0000 (0:00:01.309) 0:02:52.965 **** 2025-09-20 10:53:49.125868 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.125878 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.125887 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.125897 | orchestrator | 2025-09-20 10:53:49.125906 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-20 10:53:49.125917 | orchestrator | Saturday 20 September 2025 10:50:36 +0000 (0:00:02.151) 0:02:55.116 **** 2025-09-20 10:53:49.125931 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.125941 | orchestrator | 2025-09-20 10:53:49.125951 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-20 10:53:49.125961 | orchestrator | Saturday 20 September 2025 10:50:37 +0000 (0:00:01.314) 0:02:56.431 **** 2025-09-20 10:53:49.125971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-20 10:53:49.125987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.125997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-20 10:53:49.126139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126156 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-20 10:53:49.126205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 
'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126261 | orchestrator | 2025-09-20 10:53:49.126271 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-20 10:53:49.126281 | orchestrator | Saturday 20 September 2025 10:50:41 +0000 (0:00:04.122) 0:03:00.553 **** 2025-09-20 10:53:49.126291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-20 10:53:49.126301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126331 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.126345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-20 10:53:49.126362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-20 10:53:49.126401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126421 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.126435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.126467 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.126477 | orchestrator | 2025-09-20 10:53:49.126487 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-20 10:53:49.126497 | orchestrator | Saturday 20 September 2025 10:50:42 +0000 (0:00:00.682) 0:03:01.236 **** 2025-09-20 10:53:49.126507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-20 10:53:49.126517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-20 10:53:49.126526 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.126536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-20 10:53:49.126546 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-20 10:53:49.126556 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.126565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-20 10:53:49.126575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-20 10:53:49.126585 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.126594 | orchestrator | 2025-09-20 10:53:49.126604 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-20 10:53:49.126614 | orchestrator | Saturday 20 September 2025 10:50:43 +0000 (0:00:01.437) 0:03:02.674 **** 2025-09-20 10:53:49.126624 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.126633 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.126643 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.126653 | orchestrator | 2025-09-20 10:53:49.126662 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-20 10:53:49.126672 | orchestrator | Saturday 20 September 2025 10:50:45 +0000 (0:00:01.323) 0:03:03.998 **** 2025-09-20 10:53:49.126682 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.126691 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.126701 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.126710 | orchestrator | 2025-09-20 10:53:49.126720 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-20 10:53:49.126730 | orchestrator | Saturday 20 September 2025 10:50:47 +0000 (0:00:02.038) 0:03:06.036 **** 2025-09-20 10:53:49.126739 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.126749 | orchestrator | 2025-09-20 10:53:49.126758 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-20 10:53:49.126768 | orchestrator | Saturday 20 September 2025 10:50:48 +0000 (0:00:01.304) 0:03:07.340 **** 2025-09-20 10:53:49.126777 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 10:53:49.126793 | orchestrator | 2025-09-20 10:53:49.126802 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-20 10:53:49.126812 | orchestrator | Saturday 20 September 2025 10:50:51 +0000 (0:00:02.608) 0:03:09.948 **** 2025-09-20 10:53:49.126834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:53:49.126846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 10:53:49.126856 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.126867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:53:49.126888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 10:53:49.126898 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.126916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:53:49.126927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 10:53:49.126938 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.126947 | orchestrator | 2025-09-20 10:53:49.126957 | orchestrator | TASK [haproxy-config : 
Add configuration for mariadb when using single external frontend] *** 2025-09-20 10:53:49.126967 | orchestrator | Saturday 20 September 2025 10:50:53 +0000 (0:00:02.477) 0:03:12.425 **** 2025-09-20 10:53:49.126990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:53:49.127007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 10:53:49.127018 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:53:49.127045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 10:53:49.127055 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:53:49.127088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 10:53:49.127098 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127108 | orchestrator | 2025-09-20 10:53:49.127118 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-20 10:53:49.127127 | orchestrator | Saturday 20 September 2025 10:50:56 +0000 (0:00:02.397) 0:03:14.823 **** 2025-09-20 10:53:49.127137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 10:53:49.127153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 10:53:49.127164 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 10:53:49.127202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 10:53:49.127212 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 10:53:49.127239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 10:53:49.127249 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127259 | orchestrator | 2025-09-20 10:53:49.127269 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-20 10:53:49.127279 | orchestrator | Saturday 20 September 2025 10:50:58 +0000 (0:00:02.861) 0:03:17.685 **** 2025-09-20 10:53:49.127288 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.127298 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.127308 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.127323 | orchestrator | 2025-09-20 10:53:49.127333 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-20 10:53:49.127342 | orchestrator | Saturday 20 September 2025 10:51:00 +0000 (0:00:01.858) 0:03:19.543 **** 2025-09-20 10:53:49.127352 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127362 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127371 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127381 | orchestrator | 2025-09-20 10:53:49.127391 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-20 10:53:49.127401 | orchestrator | Saturday 20 September 2025 10:51:02 +0000 (0:00:01.416) 0:03:20.960 **** 2025-09-20 10:53:49.127410 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127420 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127430 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127439 | orchestrator | 2025-09-20 10:53:49.127449 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-20 10:53:49.127459 | orchestrator | Saturday 20 September 2025 10:51:02 +0000 (0:00:00.306) 0:03:21.266 **** 2025-09-20 10:53:49.127468 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.127478 | orchestrator | 2025-09-20 10:53:49.127488 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-20 
10:53:49.127498 | orchestrator | Saturday 20 September 2025 10:51:03 +0000 (0:00:01.306) 0:03:22.573 **** 2025-09-20 10:53:49.127508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-20 10:53:49.127523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-20 10:53:49.127540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-20 10:53:49.127551 | orchestrator | 2025-09-20 10:53:49.127561 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-20 10:53:49.127571 | orchestrator | Saturday 20 September 2025 10:51:05 +0000 (0:00:01.309) 0:03:23.883 **** 2025-09-20 10:53:49.127587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-20 
10:53:49.127598 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-20 10:53:49.127618 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-20 10:53:49.127638 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127648 | orchestrator | 2025-09-20 10:53:49.127658 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-20 10:53:49.127667 | orchestrator | Saturday 20 September 2025 10:51:05 +0000 (0:00:00.392) 0:03:24.275 **** 2025-09-20 10:53:49.127682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-20 10:53:49.127693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-20 10:53:49.127703 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127713 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-20 10:53:49.127738 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127748 | orchestrator | 2025-09-20 10:53:49.127758 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-20 10:53:49.127774 | orchestrator | Saturday 20 September 2025 10:51:06 +0000 (0:00:00.643) 0:03:24.919 **** 2025-09-20 10:53:49.127784 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127794 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127803 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127813 | orchestrator | 2025-09-20 10:53:49.127822 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-20 10:53:49.127832 | orchestrator | Saturday 20 September 2025 10:51:06 +0000 (0:00:00.814) 0:03:25.734 **** 2025-09-20 10:53:49.127842 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127852 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127861 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127871 | orchestrator | 2025-09-20 10:53:49.127881 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-20 10:53:49.127891 | orchestrator | Saturday 20 September 2025 10:51:08 +0000 (0:00:01.288) 0:03:27.022 **** 2025-09-20 10:53:49.127900 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.127910 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.127920 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.127929 | orchestrator | 2025-09-20 10:53:49.127939 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-20 10:53:49.127949 | orchestrator | Saturday 20 September 2025 10:51:08 +0000 (0:00:00.345) 0:03:27.367 **** 2025-09-20 10:53:49.127959 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.127968 | orchestrator | 2025-09-20 10:53:49.127978 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-20 10:53:49.127988 | orchestrator | Saturday 20 September 2025 10:51:10 +0000 (0:00:01.573) 0:03:28.941 **** 2025-09-20 10:53:49.127998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 10:53:49.128009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 10:53:49.128024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 10:53:49.128126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 10:53:49.128136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.128301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.128315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128347 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2025-09-20 10:53:49.128424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.128436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.128446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.128457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.128467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 10:53:49.128494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 10:53:49.128535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.128770 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.128833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.128841 | orchestrator | 2025-09-20 10:53:49.128849 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-20 10:53:49.128857 | orchestrator | Saturday 20 September 2025 10:51:14 +0000 (0:00:04.111) 0:03:33.052 **** 2025-09-20 10:53:49.128866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 10:53:49.128879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 10:53:49.128922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 10:53:49.128935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.128986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.128994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129011 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 10:53:49.129023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.129035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 10:53:49.129108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.129124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.129201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.129231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129239 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.129256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 10:53:49.129265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.129295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129303 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.129311 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.129320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.129373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 10:53:49.129406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.129420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 10:53:49.129430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 10:53:49.129444 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.129453 | orchestrator | 2025-09-20 10:53:49.129462 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-20 10:53:49.129471 | 
orchestrator | Saturday 20 September 2025 10:51:15 +0000 (0:00:01.448) 0:03:34.500 **** 2025-09-20 10:53:49.129480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-20 10:53:49.129489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-20 10:53:49.129498 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.129507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-20 10:53:49.129516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-20 10:53:49.129525 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.129533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-20 10:53:49.129542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-20 10:53:49.129551 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.129560 | orchestrator | 2025-09-20 10:53:49.129569 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-20 10:53:49.129578 | orchestrator | Saturday 20 September 2025 10:51:17 +0000 (0:00:02.068) 0:03:36.569 **** 2025-09-20 10:53:49.129587 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.129595 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.129604 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.129612 | orchestrator | 2025-09-20 10:53:49.129621 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-20 10:53:49.129630 | orchestrator | Saturday 20 September 2025 10:51:19 +0000 (0:00:01.278) 0:03:37.847 **** 2025-09-20 10:53:49.129638 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.129647 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.129656 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.129664 | orchestrator | 2025-09-20 10:53:49.129673 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-20 10:53:49.129682 | orchestrator | Saturday 20 September 2025 10:51:21 +0000 (0:00:02.009) 0:03:39.857 **** 2025-09-20 10:53:49.129694 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.129702 | orchestrator | 2025-09-20 10:53:49.129710 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-20 10:53:49.129718 | orchestrator | Saturday 20 September 2025 10:51:22 +0000 (0:00:01.228) 0:03:41.086 **** 2025-09-20 10:53:49.129732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.129746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.129755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.129763 | orchestrator | 2025-09-20 10:53:49.129771 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-20 10:53:49.129779 | orchestrator | Saturday 20 September 2025 10:51:25 +0000 (0:00:03.635) 0:03:44.722 **** 2025-09-20 10:53:49.129787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.129795 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.129814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.129827 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.129836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.129844 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.129852 | orchestrator | 2025-09-20 10:53:49.129860 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-20 10:53:49.129868 | orchestrator | Saturday 20 September 2025 10:51:26 +0000 (0:00:00.540) 0:03:45.262 **** 2025-09-20 10:53:49.129876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 10:53:49.129884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 10:53:49.129893 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.129901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 10:53:49.129909 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 10:53:49.129917 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.129925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 10:53:49.129933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 10:53:49.129941 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.129949 | orchestrator | 2025-09-20 10:53:49.129956 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-20 10:53:49.129964 | orchestrator | Saturday 20 September 2025 10:51:27 +0000 (0:00:00.748) 0:03:46.011 **** 2025-09-20 10:53:49.129972 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.129980 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.129993 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.130001 | orchestrator | 2025-09-20 10:53:49.130009 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-20 10:53:49.130044 | orchestrator | Saturday 20 September 2025 10:51:28 +0000 (0:00:01.200) 0:03:47.211 **** 2025-09-20 10:53:49.130053 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.130061 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.130069 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.130077 | orchestrator | 2025-09-20 10:53:49.130085 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-20 10:53:49.130093 | orchestrator | Saturday 20 September 2025 10:51:30 +0000 (0:00:02.088) 0:03:49.299 **** 2025-09-20 10:53:49.130105 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.130113 | orchestrator | 2025-09-20 10:53:49.130121 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-20 10:53:49.130129 | orchestrator | Saturday 20 September 2025 10:51:32 +0000 (0:00:01.501) 0:03:50.802 **** 2025-09-20 10:53:49.130149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.130159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.130169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.130220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130267 | orchestrator | 2025-09-20 10:53:49.130275 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-20 10:53:49.130284 | orchestrator | Saturday 20 September 2025 10:51:35 +0000 (0:00:03.702) 0:03:54.504 **** 2025-09-20 10:53:49.130306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.130316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130333 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.130341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.130355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130375 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.130389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.130399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.130420 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.130428 | orchestrator | 2025-09-20 10:53:49.130436 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-20 10:53:49.130444 | orchestrator | Saturday 20 September 2025 10:51:36 +0000 (0:00:01.016) 0:03:55.521 **** 2025-09-20 10:53:49.130453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130486 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.130494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130535 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.130543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 10:53:49.130576 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.130584 | orchestrator | 2025-09-20 10:53:49.130592 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-20 10:53:49.130600 | orchestrator | Saturday 20 September 2025 10:51:37 +0000 (0:00:00.812) 0:03:56.333 **** 2025-09-20 10:53:49.130609 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.130617 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.130625 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.130637 | orchestrator | 2025-09-20 10:53:49.130646 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-20 10:53:49.130654 | orchestrator | Saturday 20 September 2025 10:51:38 +0000 (0:00:01.250) 0:03:57.584 **** 2025-09-20 10:53:49.130662 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.130670 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.130678 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.130686 | orchestrator | 2025-09-20 10:53:49.130694 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-20 10:53:49.130702 | orchestrator | Saturday 20 September 2025 10:51:40 +0000 (0:00:01.797) 0:03:59.381 **** 2025-09-20 10:53:49.130710 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.130718 | orchestrator | 2025-09-20 10:53:49.130726 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-20 10:53:49.130734 | orchestrator | Saturday 20 September 2025 10:51:42 +0000 (0:00:01.542) 0:04:00.924 **** 2025-09-20 10:53:49.130742 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-20 10:53:49.130750 | orchestrator | 2025-09-20 10:53:49.130758 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-20 10:53:49.130766 | orchestrator | Saturday 20 September 2025 10:51:42 +0000 (0:00:00.832) 0:04:01.756 **** 2025-09-20 10:53:49.130775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-20 10:53:49.130783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-20 10:53:49.130796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-20 10:53:49.130805 | orchestrator | 2025-09-20 10:53:49.130813 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-20 10:53:49.130821 | orchestrator | Saturday 20 September 2025 10:51:47 +0000 (0:00:04.577) 0:04:06.334 **** 2025-09-20 10:53:49.130834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.130843 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.130852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.130865 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.130873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.130881 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.130889 | orchestrator | 2025-09-20 10:53:49.130897 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-20 10:53:49.130905 | orchestrator | Saturday 20 September 2025 10:51:48 +0000 (0:00:01.059) 0:04:07.394 **** 2025-09-20 10:53:49.130914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 10:53:49.130922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 10:53:49.130931 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.130939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 10:53:49.130947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 10:53:49.130956 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.130964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 10:53:49.130972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 10:53:49.130980 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.130988 | orchestrator | 2025-09-20 10:53:49.130996 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-20 10:53:49.131004 | orchestrator | Saturday 20 September 2025 10:51:50 +0000 (0:00:01.587) 0:04:08.981 **** 2025-09-20 10:53:49.131012 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.131026 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.131034 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.131042 | orchestrator | 2025-09-20 10:53:49.131050 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-20 10:53:49.131058 | orchestrator | Saturday 20 September 2025 10:51:52 +0000 (0:00:02.423) 0:04:11.405 **** 2025-09-20 10:53:49.131066 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.131073 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.131081 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.131094 | orchestrator | 2025-09-20 10:53:49.131102 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-20 10:53:49.131110 | orchestrator | Saturday 20 September 2025 10:51:55 +0000 (0:00:02.790) 0:04:14.195 **** 2025-09-20 10:53:49.131123 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-20 10:53:49.131131 | orchestrator | 2025-09-20 10:53:49.131140 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-20 10:53:49.131148 | orchestrator | Saturday 20 September 2025 10:51:56 +0000 (0:00:01.149) 0:04:15.345 **** 2025-09-20 10:53:49.131156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.131164 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.131173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.131224 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.131233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.131241 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.131249 | orchestrator | 2025-09-20 10:53:49.131257 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-20 10:53:49.131266 | orchestrator | Saturday 20 September 2025 10:51:57 +0000 (0:00:01.102) 0:04:16.447 **** 2025-09-20 10:53:49.131274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.131282 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.131290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.131304 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.131316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 10:53:49.131324 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.131332 | orchestrator | 2025-09-20 10:53:49.131340 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-20 10:53:49.131348 | orchestrator | Saturday 20 September 2025 10:51:58 +0000 (0:00:01.115) 0:04:17.563 **** 2025-09-20 10:53:49.131357 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.131365 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.131373 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.131381 | orchestrator | 2025-09-20 10:53:49.131394 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-20 10:53:49.131402 | orchestrator | Saturday 20 September 2025 10:52:00 +0000 (0:00:01.530) 0:04:19.093 **** 2025-09-20 10:53:49.131410 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.131418 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.131426 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.131434 | orchestrator | 2025-09-20 10:53:49.131442 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-20 10:53:49.131450 | orchestrator | Saturday 20 September 2025 10:52:02 +0000 (0:00:02.002) 0:04:21.096 **** 2025-09-20 10:53:49.131459 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.131466 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.131474 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.131482 | orchestrator | 2025-09-20 10:53:49.131490 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-20 10:53:49.131498 | orchestrator | Saturday 20 September 2025 10:52:05 +0000 (0:00:02.985) 0:04:24.082 **** 2025-09-20 10:53:49.131507 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-20 10:53:49.131515 | orchestrator | 2025-09-20 10:53:49.131523 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-20 10:53:49.131531 | orchestrator | Saturday 20 September 2025 10:52:06 +0000 (0:00:00.888) 0:04:24.970 **** 2025-09-20 10:53:49.131539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 10:53:49.131547 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.131556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 10:53:49.131564 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.131572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 10:53:49.131585 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.131593 | orchestrator | 2025-09-20 10:53:49.131601 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-20 10:53:49.131610 | orchestrator | Saturday 20 September 2025 10:52:07 +0000 (0:00:01.362) 0:04:26.333 **** 2025-09-20 10:53:49.131618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 10:53:49.131630 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.131638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 10:53:49.131647 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.131660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 10:53:49.131669 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.131677 | orchestrator | 2025-09-20 10:53:49.131685 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-20 10:53:49.131693 | orchestrator | Saturday 20 September 2025 10:52:08 +0000 (0:00:01.167) 0:04:27.500 **** 2025-09-20 10:53:49.131701 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.131709 | orchestrator | skipping: [testbed-node-1] 2025-09-20 
10:53:49.131717 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.131725 | orchestrator | 2025-09-20 10:53:49.131733 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-20 10:53:49.131741 | orchestrator | Saturday 20 September 2025 10:52:10 +0000 (0:00:01.432) 0:04:28.932 **** 2025-09-20 10:53:49.131747 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.131754 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.131761 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.131768 | orchestrator | 2025-09-20 10:53:49.131774 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-20 10:53:49.131781 | orchestrator | Saturday 20 September 2025 10:52:12 +0000 (0:00:02.310) 0:04:31.244 **** 2025-09-20 10:53:49.131788 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.131795 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.131801 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.131808 | orchestrator | 2025-09-20 10:53:49.131815 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-20 10:53:49.131826 | orchestrator | Saturday 20 September 2025 10:52:15 +0000 (0:00:03.217) 0:04:34.462 **** 2025-09-20 10:53:49.131832 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.131839 | orchestrator | 2025-09-20 10:53:49.131846 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-20 10:53:49.131853 | orchestrator | Saturday 20 September 2025 10:52:17 +0000 (0:00:01.603) 0:04:36.065 **** 2025-09-20 10:53:49.131860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.131868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:53:49.131878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.131890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.131898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.131905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.131917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:53:49.131924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.131934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.131946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.131953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.131965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:53:49.131972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.131979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.131985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.131992 | orchestrator | 2025-09-20 10:53:49.131999 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-20 10:53:49.132010 | orchestrator | Saturday 20 September 2025 10:52:20 +0000 (0:00:03.368) 0:04:39.434 **** 2025-09-20 10:53:49.132021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.132029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:53:49.132040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.132047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.132054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.132061 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.132071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.132083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:53:49.132090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.132102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.132109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.132116 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.132123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.132130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:53:49.132142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.132154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:53:49.132166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:53:49.132173 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.132193 | orchestrator | 2025-09-20 10:53:49.132200 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-20 10:53:49.132206 | orchestrator | Saturday 20 September 2025 10:52:21 +0000 (0:00:00.744) 0:04:40.179 **** 2025-09-20 10:53:49.132213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 10:53:49.132220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 10:53:49.132227 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.132234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 10:53:49.132241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 10:53:49.132248 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.132255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 10:53:49.132262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 10:53:49.132268 | orchestrator | skipping: [testbed-node-2] 2025-09-20 
10:53:49.132275 | orchestrator | 2025-09-20 10:53:49.132282 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-20 10:53:49.132289 | orchestrator | Saturday 20 September 2025 10:52:22 +0000 (0:00:01.542) 0:04:41.722 **** 2025-09-20 10:53:49.132296 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.132302 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.132309 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.132316 | orchestrator | 2025-09-20 10:53:49.132323 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-20 10:53:49.132329 | orchestrator | Saturday 20 September 2025 10:52:24 +0000 (0:00:01.506) 0:04:43.228 **** 2025-09-20 10:53:49.132336 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.132343 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.132350 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.132356 | orchestrator | 2025-09-20 10:53:49.132363 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-20 10:53:49.132370 | orchestrator | Saturday 20 September 2025 10:52:26 +0000 (0:00:02.130) 0:04:45.359 **** 2025-09-20 10:53:49.132377 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.132388 | orchestrator | 2025-09-20 10:53:49.132398 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-20 10:53:49.132405 | orchestrator | Saturday 20 September 2025 10:52:27 +0000 (0:00:01.335) 0:04:46.694 **** 2025-09-20 10:53:49.132416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:53:49.132424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:53:49.132431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:53:49.132439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:53:49.132454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:53:49.132467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:53:49.132474 | orchestrator | 2025-09-20 10:53:49.132481 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-20 10:53:49.132488 | orchestrator | Saturday 20 September 2025 10:52:33 +0000 (0:00:05.700) 0:04:52.395 **** 2025-09-20 10:53:49.132495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:53:49.132502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:53:49.132514 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.132525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:53:49.132536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:53:49.132551 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.132558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:53:49.132566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:53:49.132577 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.132584 | orchestrator | 2025-09-20 10:53:49.132591 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-20
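Every changed/skipping entry above repeats the full service definition because the haproxy-config role evidently loops over the project's services mapping as key/value pairs; the {'key': ..., 'value': ...} shape echoed in this log is exactly what Ansible's dict2items filter (or a with_dict loop) produces, and only the healthcheck address (192.168.16.10/.11/.12) differs between the three nodes. The following minimal Python sketch is hand-written for illustration, not taken from the kolla-ansible roles, and reproduces that item shape with a trimmed copy of the opensearch-dashboards definition from the output above:

```python
# Illustration only (not code from the kolla-ansible roles): emulate how a
# services mapping becomes the "(item={'key': ..., 'value': ...})" loop items
# echoed in the task output above. Values are trimmed from the
# opensearch-dashboards definition shown in the log.

def dict2items(mapping):
    # Same shape as Ansible's dict2items filter: a list of {'key', 'value'} dicts.
    return [{"key": key, "value": value} for key, value in mapping.items()]

services = {
    "opensearch-dashboards": {
        "container_name": "opensearch_dashboards",
        "enabled": True,
        "image": "registry.osism.tech/kolla/opensearch-dashboards:2024.2",
        "healthcheck": {"test": ["CMD-SHELL",
                                 "healthcheck_curl http://192.168.16.10:5601"]},
        "haproxy": {
            "opensearch-dashboards": {
                "enabled": True, "mode": "http", "external": False, "port": "5601"},
            "opensearch_dashboards_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz", "port": "5601"},
        },
    },
}

for item in dict2items(services):
    # Mirrors the per-item result lines above: one result per service key.
    print(f"item key={item['key']!r} haproxy sections={list(item['value']['haproxy'])}")
```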
10:53:49.132598 | orchestrator | Saturday 20 September 2025 10:52:34 +0000 (0:00:00.762) 0:04:53.157 **** 2025-09-20 10:53:49.132605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-20 10:53:49.132615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 10:53:49.132622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 10:53:49.132629 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.132639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-20 10:53:49.132647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 10:53:49.132654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 10:53:49.132661 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.132667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-20 10:53:49.132674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 10:53:49.132681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 10:53:49.132688 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.132695 | orchestrator | 2025-09-20 10:53:49.132702 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-20 10:53:49.132708 | orchestrator | Saturday 20 September 2025 10:52:35 +0000 (0:00:00.924) 0:04:54.082 **** 2025-09-20 10:53:49.132715 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.132722 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.132739 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.132746 | orchestrator | 2025-09-20 10:53:49.132753 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-20 10:53:49.132759 | orchestrator | Saturday 20 September 2025 10:52:36 +0000 (0:00:00.926) 0:04:55.008 **** 2025-09-20 10:53:49.132766 | orchestrator | skipping: 
[testbed-node-0] 2025-09-20 10:53:49.132773 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.132779 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.132786 | orchestrator | 2025-09-20 10:53:49.132850 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-20 10:53:49.132859 | orchestrator | Saturday 20 September 2025 10:52:37 +0000 (0:00:01.343) 0:04:56.352 **** 2025-09-20 10:53:49.132871 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.132878 | orchestrator | 2025-09-20 10:53:49.132885 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-20 10:53:49.132892 | orchestrator | Saturday 20 September 2025 10:52:38 +0000 (0:00:01.333) 0:04:57.685 **** 2025-09-20 10:53:49.132899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 10:53:49.132909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 10:53:49.132917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.132924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.132931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.132938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 10:53:49.132967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 10:53:49.132975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.132982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.132992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.132999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 10:53:49.133006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 10:53:49.133014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 10:53:49.133056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 10:53:49.133063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 10:53:49.133105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 10:53:49.133113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 10:53:49.133150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 10:53:49.133157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133194 | orchestrator | 2025-09-20 10:53:49.133201 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-20 10:53:49.133207 | orchestrator | Saturday 20 September 2025 10:52:43 +0000 (0:00:04.171) 0:05:01.857 **** 2025-09-20 10:53:49.133214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-20 10:53:49.133225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 10:53:49.133236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-20 10:53:49.133268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 10:53:49.133280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133305 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-20 10:53:49.133322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 10:53:49.133329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-20 10:53:49.133367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-20 10:53:49.133374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 10:53:49.133381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 10:53:49.133392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133454 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-20 10:53:49.133473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 10:53:49.133480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 10:53:49.133498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 10:53:49.133505 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133511 | orchestrator | 2025-09-20 10:53:49.133518 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-20 10:53:49.133525 | orchestrator | Saturday 20 September 2025 10:52:44 +0000 (0:00:01.101) 0:05:02.958 **** 2025-09-20 10:53:49.133532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-20 10:53:49.133539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-20 10:53:49.133549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-20 10:53:49.133557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-20 10:53:49.133574 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-20 10:53:49.133588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-20 
10:53:49.133598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-20 10:53:49.133605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-20 10:53:49.133612 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-20 10:53:49.133625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-20 10:53:49.133632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-20 10:53:49.133643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-20 10:53:49.133650 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133656 | orchestrator | 2025-09-20 10:53:49.133663 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-20 10:53:49.133670 | orchestrator | Saturday 20 September 2025 10:52:45 +0000 (0:00:00.944) 0:05:03.903 **** 2025-09-20 10:53:49.133677 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133683 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133690 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133697 | orchestrator | 2025-09-20 10:53:49.133703 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-20 10:53:49.133710 | orchestrator | Saturday 20 September 2025 10:52:45 +0000 (0:00:00.481) 0:05:04.384 **** 2025-09-20 10:53:49.133717 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133724 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133730 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133737 | orchestrator | 2025-09-20 10:53:49.133744 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-20 10:53:49.133750 | orchestrator | Saturday 20 September 2025 10:52:47 +0000 (0:00:01.463) 0:05:05.848 **** 2025-09-20 10:53:49.133757 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.133764 | orchestrator | 2025-09-20 10:53:49.133771 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-20 
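The 'haproxy' sub-dicts dumped in these tasks (for opensearch and prometheus above, rabbitmq and skyline below) drive which frontends get rendered: 'enabled' switches an entry on, 'external' selects the public endpoint behind api.testbed.osism.xyz instead of the internal VIP, and 'port'/'listen_port' set the bind port (the rabbitmq entry below uses the string 'yes' for 'enabled', which the role presumably normalizes with a bool cast). As a rough illustration of that filtering, and not the actual template logic, the sketch below separates the enabled internal and external entries of the prometheus definitions shown above; prometheus_server_external is disabled, so only the Alertmanager ends up with an external frontend:

```python
# Rough illustration (not kolla-ansible template code): split a service's
# 'haproxy' entries, as dumped in the prometheus tasks above, into enabled
# internal and external frontends. The auth_user/auth_pass fields are omitted.

haproxy_entries = {
    "prometheus_server": {
        "enabled": True, "external": False, "port": "9091", "active_passive": True},
    "prometheus_server_external": {
        "enabled": False, "external": True, "external_fqdn": "api.testbed.osism.xyz",
        "port": "9091", "active_passive": True},
    "prometheus_alertmanager": {
        "enabled": True, "external": False, "port": "9093", "active_passive": True},
    "prometheus_alertmanager_external": {
        "enabled": True, "external": True, "external_fqdn": "api.testbed.osism.xyz",
        "port": "9093", "active_passive": True},
}

internal = {name: entry["port"]
            for name, entry in haproxy_entries.items()
            if entry["enabled"] and not entry["external"]}
external = {name: (entry["external_fqdn"], entry["port"])
            for name, entry in haproxy_entries.items()
            if entry["enabled"] and entry["external"]}

print(internal)  # {'prometheus_server': '9091', 'prometheus_alertmanager': '9093'}
print(external)  # {'prometheus_alertmanager_external': ('api.testbed.osism.xyz', '9093')}
```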
10:53:49.133777 | orchestrator | Saturday 20 September 2025 10:52:48 +0000 (0:00:01.721) 0:05:07.570 **** 2025-09-20 10:53:49.133793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:53:49.133801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:53:49.133808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 10:53:49.133815 | orchestrator | 2025-09-20 10:53:49.133825 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-20 10:53:49.133832 | orchestrator | Saturday 20 September 2025 10:52:51 +0000 (0:00:02.587) 0:05:10.157 **** 2025-09-20 10:53:49.133839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-20 10:53:49.133851 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-20 10:53:49.133869 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-20 10:53:49.133883 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133890 | orchestrator | 2025-09-20 10:53:49.133896 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-20 10:53:49.133903 | orchestrator | Saturday 20 September 2025 10:52:51 +0000 (0:00:00.417) 0:05:10.574 **** 2025-09-20 10:53:49.133910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}})  2025-09-20 10:53:49.133917 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-20 10:53:49.133930 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-20 10:53:49.133943 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133950 | orchestrator | 2025-09-20 10:53:49.133957 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-20 10:53:49.133964 | orchestrator | Saturday 20 September 2025 10:52:52 +0000 (0:00:00.932) 0:05:11.507 **** 2025-09-20 10:53:49.133974 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.133981 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.133988 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.133994 | orchestrator | 2025-09-20 10:53:49.134001 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-20 10:53:49.134030 | orchestrator | Saturday 20 September 2025 10:52:53 +0000 (0:00:00.382) 0:05:11.890 **** 2025-09-20 10:53:49.134038 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134045 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134053 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134060 | orchestrator | 2025-09-20 10:53:49.134067 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-20 10:53:49.134074 | orchestrator | Saturday 20 September 2025 10:52:54 +0000 (0:00:01.168) 0:05:13.058 **** 2025-09-20 10:53:49.134081 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:53:49.134087 | orchestrator | 2025-09-20 10:53:49.134094 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-20 10:53:49.134101 | orchestrator | Saturday 20 September 2025 10:52:55 +0000 (0:00:01.583) 0:05:14.642 **** 2025-09-20 10:53:49.134111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.134119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.134126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.134137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.134150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.134160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-20 10:53:49.134167 | orchestrator | 2025-09-20 10:53:49.134209 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-20 10:53:49.134217 | orchestrator | Saturday 20 September 2025 10:53:01 +0000 (0:00:05.617) 0:05:20.259 **** 2025-09-20 10:53:49.134224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.134235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.134247 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.134265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.134272 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.134287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-20 10:53:49.134298 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134304 | orchestrator | 2025-09-20 10:53:49.134311 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-20 10:53:49.134321 | orchestrator | Saturday 20 September 2025 10:53:02 +0000 (0:00:00.622) 0:05:20.882 **** 2025-09-20 10:53:49.134328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134356 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134394 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 10:53:49.134428 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134434 | orchestrator | 2025-09-20 10:53:49.134441 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-20 10:53:49.134451 | orchestrator | Saturday 20 September 2025 10:53:03 +0000 (0:00:01.291) 0:05:22.173 **** 2025-09-20 10:53:49.134458 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.134464 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.134470 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.134476 | orchestrator | 2025-09-20 10:53:49.134483 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-20 10:53:49.134489 | orchestrator | Saturday 20 September 2025 10:53:04 +0000 (0:00:01.346) 0:05:23.519 **** 2025-09-20 10:53:49.134495 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.134501 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.134508 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.134514 | orchestrator | 2025-09-20 10:53:49.134520 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-20 10:53:49.134526 | orchestrator | Saturday 20 September 2025 10:53:06 +0000 (0:00:02.164) 0:05:25.684 **** 2025-09-20 10:53:49.134533 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134539 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134545 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134551 | orchestrator | 2025-09-20 10:53:49.134557 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-20 10:53:49.134564 | orchestrator | Saturday 20 September 2025 10:53:07 +0000 (0:00:00.349) 0:05:26.034 **** 2025-09-20 10:53:49.134570 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134576 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134582 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134589 | orchestrator | 2025-09-20 10:53:49.134595 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-20 10:53:49.134604 | orchestrator | Saturday 20 September 2025 10:53:07 +0000 (0:00:00.316) 0:05:26.350 **** 2025-09-20 10:53:49.134610 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134617 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134623 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134629 | orchestrator | 2025-09-20 10:53:49.134636 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-20 10:53:49.134642 | orchestrator | Saturday 20 September 2025 10:53:08 +0000 (0:00:00.510) 0:05:26.860 **** 2025-09-20 10:53:49.134648 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134654 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134660 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134667 | orchestrator | 2025-09-20 10:53:49.134673 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-20 10:53:49.134679 | orchestrator | Saturday 20 September 2025 10:53:08 +0000 (0:00:00.305) 0:05:27.166 **** 2025-09-20 
10:53:49.134686 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134692 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134698 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134704 | orchestrator | 2025-09-20 10:53:49.134711 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-20 10:53:49.134717 | orchestrator | Saturday 20 September 2025 10:53:08 +0000 (0:00:00.304) 0:05:27.471 **** 2025-09-20 10:53:49.134723 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.134729 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.134735 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.134742 | orchestrator | 2025-09-20 10:53:49.134748 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-20 10:53:49.134754 | orchestrator | Saturday 20 September 2025 10:53:09 +0000 (0:00:00.679) 0:05:28.151 **** 2025-09-20 10:53:49.134761 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.134767 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.134773 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.134779 | orchestrator | 2025-09-20 10:53:49.134786 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-20 10:53:49.134796 | orchestrator | Saturday 20 September 2025 10:53:10 +0000 (0:00:00.657) 0:05:28.808 **** 2025-09-20 10:53:49.134802 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.134808 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.134814 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.134820 | orchestrator | 2025-09-20 10:53:49.134832 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-20 10:53:49.134838 | orchestrator | Saturday 20 September 2025 10:53:10 +0000 (0:00:00.298) 0:05:29.107 **** 2025-09-20 10:53:49.134845 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.134851 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.134857 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.134863 | orchestrator | 2025-09-20 10:53:49.134870 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-20 10:53:49.134876 | orchestrator | Saturday 20 September 2025 10:53:11 +0000 (0:00:00.839) 0:05:29.946 **** 2025-09-20 10:53:49.134882 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.134888 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.134895 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.134901 | orchestrator | 2025-09-20 10:53:49.134907 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-20 10:53:49.134914 | orchestrator | Saturday 20 September 2025 10:53:12 +0000 (0:00:01.075) 0:05:31.021 **** 2025-09-20 10:53:49.134920 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.134926 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.134932 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.134938 | orchestrator | 2025-09-20 10:53:49.134944 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-20 10:53:49.134951 | orchestrator | Saturday 20 September 2025 10:53:13 +0000 (0:00:00.839) 0:05:31.861 **** 2025-09-20 10:53:49.134957 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.134963 | orchestrator | changed: [testbed-node-2] 2025-09-20 
10:53:49.134969 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.134976 | orchestrator | 2025-09-20 10:53:49.134982 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-20 10:53:49.134988 | orchestrator | Saturday 20 September 2025 10:53:21 +0000 (0:00:08.213) 0:05:40.074 **** 2025-09-20 10:53:49.134994 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.135001 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.135007 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.135013 | orchestrator | 2025-09-20 10:53:49.135019 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-20 10:53:49.135025 | orchestrator | Saturday 20 September 2025 10:53:22 +0000 (0:00:00.742) 0:05:40.817 **** 2025-09-20 10:53:49.135032 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.135038 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.135044 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.135050 | orchestrator | 2025-09-20 10:53:49.135057 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-20 10:53:49.135063 | orchestrator | Saturday 20 September 2025 10:53:30 +0000 (0:00:08.415) 0:05:49.233 **** 2025-09-20 10:53:49.135069 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.135075 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.135082 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.135088 | orchestrator | 2025-09-20 10:53:49.135095 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-20 10:53:49.135105 | orchestrator | Saturday 20 September 2025 10:53:34 +0000 (0:00:03.996) 0:05:53.230 **** 2025-09-20 10:53:49.135114 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:53:49.135124 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:53:49.135135 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:53:49.135144 | orchestrator | 2025-09-20 10:53:49.135154 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-20 10:53:49.135164 | orchestrator | Saturday 20 September 2025 10:53:43 +0000 (0:00:08.872) 0:06:02.102 **** 2025-09-20 10:53:49.135185 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.135203 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.135213 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.135223 | orchestrator | 2025-09-20 10:53:49.135234 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-20 10:53:49.135244 | orchestrator | Saturday 20 September 2025 10:53:43 +0000 (0:00:00.310) 0:06:02.412 **** 2025-09-20 10:53:49.135253 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.135268 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.135279 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.135286 | orchestrator | 2025-09-20 10:53:49.135292 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-20 10:53:49.135299 | orchestrator | Saturday 20 September 2025 10:53:43 +0000 (0:00:00.305) 0:06:02.718 **** 2025-09-20 10:53:49.135305 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.135311 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.135317 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.135324 | orchestrator | 
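The "Wait for backup haproxy/proxysql to start" handlers above, and the "Wait for ... to listen on VIP" handlers that follow, are essentially TCP readiness probes against the restarted loadbalancer containers. Below is a minimal sketch of such a probe in Python, using an example address and port (the actual VIP is not shown in this excerpt); it illustrates the idea only and is not kolla-ansible's real implementation, which expresses the same check through Ansible modules.

```python
import socket
import time


def wait_for_listener(host: str, port: int, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the service is accepting connections.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False


if __name__ == "__main__":
    # Example values only -- the VIP address is not visible in this log excerpt.
    print(wait_for_listener("192.168.16.254", 443))
```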
2025-09-20 10:53:49.135330 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-20 10:53:49.135336 | orchestrator | Saturday 20 September 2025 10:53:44 +0000 (0:00:00.532) 0:06:03.251 **** 2025-09-20 10:53:49.135342 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.135349 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.135355 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.135361 | orchestrator | 2025-09-20 10:53:49.135367 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-20 10:53:49.135374 | orchestrator | Saturday 20 September 2025 10:53:44 +0000 (0:00:00.283) 0:06:03.535 **** 2025-09-20 10:53:49.135380 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.135386 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.135392 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.135398 | orchestrator | 2025-09-20 10:53:49.135405 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-20 10:53:49.135411 | orchestrator | Saturday 20 September 2025 10:53:45 +0000 (0:00:00.332) 0:06:03.868 **** 2025-09-20 10:53:49.135417 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:53:49.135423 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:53:49.135430 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:53:49.135436 | orchestrator | 2025-09-20 10:53:49.135442 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-20 10:53:49.135449 | orchestrator | Saturday 20 September 2025 10:53:45 +0000 (0:00:00.352) 0:06:04.220 **** 2025-09-20 10:53:49.135455 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.135461 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.135467 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.135473 | orchestrator | 2025-09-20 10:53:49.135480 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-20 10:53:49.135490 | orchestrator | Saturday 20 September 2025 10:53:46 +0000 (0:00:01.370) 0:06:05.591 **** 2025-09-20 10:53:49.135497 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:53:49.135503 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:53:49.135509 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:53:49.135515 | orchestrator | 2025-09-20 10:53:49.135522 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:53:49.135528 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-20 10:53:49.135535 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-20 10:53:49.135541 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-20 10:53:49.135548 | orchestrator | 2025-09-20 10:53:49.135554 | orchestrator | 2025-09-20 10:53:49.135560 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:53:49.135572 | orchestrator | Saturday 20 September 2025 10:53:47 +0000 (0:00:00.830) 0:06:06.422 **** 2025-09-20 10:53:49.135578 | orchestrator | =============================================================================== 2025-09-20 10:53:49.135584 | orchestrator | loadbalancer : Start backup keepalived container 
------------------------ 8.87s
2025-09-20 10:53:49.135591 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.42s
2025-09-20 10:53:49.135597 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.21s
2025-09-20 10:53:49.135603 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.70s
2025-09-20 10:53:49.135609 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.62s
2025-09-20 10:53:49.135616 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.25s
2025-09-20 10:53:49.135622 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.03s
2025-09-20 10:53:49.135628 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.58s
2025-09-20 10:53:49.135634 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.35s
2025-09-20 10:53:49.135640 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.17s
2025-09-20 10:53:49.135647 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.12s
2025-09-20 10:53:49.135653 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.11s
2025-09-20 10:53:49.135659 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 4.01s
2025-09-20 10:53:49.135665 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.00s
2025-09-20 10:53:49.135671 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 3.82s
2025-09-20 10:53:49.135677 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.81s
2025-09-20 10:53:49.135684 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.75s
2025-09-20 10:53:49.135690 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.70s
2025-09-20 10:53:49.135696 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.64s
2025-09-20 10:53:49.135702 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.60s
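Most of the time in the recap above went to haproxy-config template tasks. Their input is the per-service dicts dumped earlier in this play (for example the skyline_apiserver entry with mode http, port/listen_port 9998 and external_fqdn api.testbed.osism.xyz). The sketch below only illustrates that mapping with values taken from the log; it is not the role's actual Jinja2 template, and the stanza layout is simplified.

```python
# Rough sketch of what haproxy-config does conceptually: render one
# frontend/backend pair per entry of a service's "haproxy" dict.

def render_service(name: str, cfg: dict, backends: list[tuple[str, str]]) -> str:
    lines = [
        f"frontend {name}_front",
        f"    mode {cfg['mode']}",
        f"    bind *:{cfg['listen_port']}",
        f"    default_backend {name}_back",
        f"backend {name}_back",
        f"    mode {cfg['mode']}",
    ]
    for host, addr in backends:
        # Backend port comes from the service definition; health checks enabled.
        lines.append(f"    server {host} {addr}:{cfg['port']} check")
    return "\n".join(lines)


# Values taken from the skyline_apiserver entry logged above.
skyline_apiserver = {"enabled": "yes", "mode": "http", "external": False,
                     "port": "9998", "listen_port": "9998", "tls_backend": "no"}
backends = [("testbed-node-0", "192.168.16.10"),
            ("testbed-node-1", "192.168.16.11"),
            ("testbed-node-2", "192.168.16.12")]
print(render_service("skyline_apiserver", skyline_apiserver, backends))
```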
2025-09-20 10:53:52 - 10:55:47 | orchestrator | [status checks repeated roughly every 3 seconds: tasks fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1, c9dac8d4-4512-4218-b1f7-15693de93c4b and 9d5b5482-6b65-40b3-8862-bbfee041f556 were reported "in state STARTED" on every check, each followed by "Wait 1 second(s) until the next check"]
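These repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the osism client waiting on the three asynchronous tasks it started; the checks land roughly three seconds apart even though the configured wait is one second, presumably because each status query itself takes time. A minimal sketch of such a wait loop follows, with get_task_state as a hypothetical stand-in for the real status query, which is not shown in this log.

```python
import time


def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for querying a task's state from the manager API."""
    raise NotImplementedError


def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> dict[str, str]:
    """Poll until every task has left STARTED, mirroring the log output above."""
    states = {task_id: "STARTED" for task_id in task_ids}
    while any(state == "STARTED" for state in states.values()):
        for task_id in task_ids:
            states[task_id] = get_task_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        if any(state == "STARTED" for state in states.values()):
            print("Wait 1 second(s) until the next check")
            time.sleep(interval)
    return states
```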
2025-09-20 10:55:50.996762 | orchestrator | 2025-09-20 10:55:50 | INFO  | Task fbb1dbda-0b7f-43f0-a5c0-c535c7dcdca1 is in state SUCCESS
2025-09-20 10:55:50.999088 | orchestrator |
2025-09-20 10:55:50.999176 | orchestrator |
2025-09-20 10:55:50.999206 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-20 10:55:50.999220 | orchestrator |
2025-09-20 10:55:50.999232 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-20 10:55:50.999243 | orchestrator | Saturday 20 September 2025 10:45:17 +0000 (0:00:00.812) 0:00:00.812 ****
2025-09-20 10:55:50.999256 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:55:50.999268 | orchestrator |
2025-09-20 10:55:50.999279 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-20 10:55:50.999337 | orchestrator | Saturday 20 September 2025 10:45:18 +0000 (0:00:01.250) 0:00:02.063 ****
2025-09-20 10:55:50.999351 | orchestrator | ok: [testbed-node-3]
2025-09-20 10:55:50.999363 | orchestrator | ok: [testbed-node-5]
2025-09-20 10:55:50.999603 | orchestrator | ok: [testbed-node-4]
2025-09-20 10:55:50.999623 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:55:50.999672 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:55:50.999685 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:55:50.999697 | orchestrator |
2025-09-20 10:55:50.999709 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-20 10:55:50.999721 | orchestrator | Saturday 20 September 2025 10:45:20 +0000 (0:00:01.905) 0:00:03.968 ****
2025-09-20 10:55:50.999757 | orchestrator | ok: [testbed-node-3]
2025-09-20 10:55:50.999770 | orchestrator | ok: [testbed-node-4]
2025-09-20 10:55:50.999813 | orchestrator | ok: [testbed-node-5]
2025-09-20 10:55:50.999825 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:55:50.999837 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:55:50.999849 | orchestrator | ok:
[testbed-node-2] 2025-09-20 10:55:50.999861 | orchestrator | 2025-09-20 10:55:50.999873 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-20 10:55:50.999885 | orchestrator | Saturday 20 September 2025 10:45:21 +0000 (0:00:00.614) 0:00:04.582 **** 2025-09-20 10:55:50.999898 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:50.999910 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:50.999922 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:50.999934 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:50.999945 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:50.999957 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:50.999985 | orchestrator | 2025-09-20 10:55:50.999997 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-20 10:55:51.000009 | orchestrator | Saturday 20 September 2025 10:45:22 +0000 (0:00:01.131) 0:00:05.714 **** 2025-09-20 10:55:51.000022 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.000033 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.000178 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.000191 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.000201 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.000263 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.000276 | orchestrator | 2025-09-20 10:55:51.000287 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-20 10:55:51.000298 | orchestrator | Saturday 20 September 2025 10:45:23 +0000 (0:00:00.814) 0:00:06.529 **** 2025-09-20 10:55:51.000309 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.000319 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.000353 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.000366 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.000437 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.000458 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.000477 | orchestrator | 2025-09-20 10:55:51.000521 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-20 10:55:51.000534 | orchestrator | Saturday 20 September 2025 10:45:23 +0000 (0:00:00.660) 0:00:07.190 **** 2025-09-20 10:55:51.000544 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.000555 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.000569 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.000587 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.000605 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.000622 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.000640 | orchestrator | 2025-09-20 10:55:51.000658 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-20 10:55:51.000677 | orchestrator | Saturday 20 September 2025 10:45:24 +0000 (0:00:01.060) 0:00:08.250 **** 2025-09-20 10:55:51.000695 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.000716 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.000733 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.000751 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.000767 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.000784 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.000801 | orchestrator | 2025-09-20 10:55:51.000818 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2025-09-20 10:55:51.000836 | orchestrator | Saturday 20 September 2025 10:45:25 +0000 (0:00:00.955) 0:00:09.206 **** 2025-09-20 10:55:51.000854 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.000873 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.000890 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.001425 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.001452 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.001470 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.001489 | orchestrator | 2025-09-20 10:55:51.001509 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-20 10:55:51.001528 | orchestrator | Saturday 20 September 2025 10:45:26 +0000 (0:00:00.973) 0:00:10.180 **** 2025-09-20 10:55:51.001547 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:55:51.001566 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:55:51.001586 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:55:51.001792 | orchestrator | 2025-09-20 10:55:51.001889 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-20 10:55:51.001905 | orchestrator | Saturday 20 September 2025 10:45:27 +0000 (0:00:00.752) 0:00:10.933 **** 2025-09-20 10:55:51.001916 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.001927 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.001938 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.001987 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.001998 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.002009 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.002207 | orchestrator | 2025-09-20 10:55:51.002253 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-20 10:55:51.002265 | orchestrator | Saturday 20 September 2025 10:45:29 +0000 (0:00:01.619) 0:00:12.552 **** 2025-09-20 10:55:51.002275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:55:51.002285 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:55:51.002294 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:55:51.002304 | orchestrator | 2025-09-20 10:55:51.002314 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-20 10:55:51.002324 | orchestrator | Saturday 20 September 2025 10:45:32 +0000 (0:00:02.840) 0:00:15.393 **** 2025-09-20 10:55:51.002334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 10:55:51.002344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 10:55:51.002369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 10:55:51.002379 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.002389 | orchestrator | 2025-09-20 10:55:51.002399 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-20 10:55:51.002409 | orchestrator | Saturday 20 September 2025 10:45:32 +0000 (0:00:00.678) 0:00:16.072 **** 2025-09-20 10:55:51.002422 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002475 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.002491 | orchestrator | 2025-09-20 10:55:51.002507 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-20 10:55:51.002525 | orchestrator | Saturday 20 September 2025 10:45:33 +0000 (0:00:01.141) 0:00:17.213 **** 2025-09-20 10:55:51.002544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002596 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.002613 | orchestrator | 2025-09-20 10:55:51.002629 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-20 10:55:51.002736 | orchestrator | Saturday 20 September 2025 10:45:34 +0000 (0:00:00.338) 0:00:17.551 **** 2025-09-20 10:55:51.002921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-20 10:45:29.898498', 'end': '2025-09-20 10:45:30.170259', 'delta': '0:00:00.271761', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 
'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-20 10:45:30.720191', 'end': '2025-09-20 10:45:30.993851', 'delta': '0:00:00.273660', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.002986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-20 10:45:31.662535', 'end': '2025-09-20 10:45:31.911813', 'delta': '0:00:00.249278', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.003005 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.003023 | orchestrator | 2025-09-20 10:55:51.003039 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-20 10:55:51.003056 | orchestrator | Saturday 20 September 2025 10:45:34 +0000 (0:00:00.596) 0:00:18.147 **** 2025-09-20 10:55:51.003074 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.003090 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.003132 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.003150 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.003165 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.003180 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.003196 | orchestrator | 2025-09-20 10:55:51.003212 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-20 10:55:51.003228 | orchestrator | Saturday 20 September 2025 10:45:37 +0000 (0:00:02.876) 0:00:21.024 **** 2025-09-20 10:55:51.003244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.003261 | orchestrator | 2025-09-20 10:55:51.003277 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-20 10:55:51.003294 | orchestrator | Saturday 20 September 2025 10:45:38 +0000 (0:00:00.740) 0:00:21.765 **** 2025-09-20 10:55:51.003310 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.003327 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.003344 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.003359 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.003374 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.003389 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.003405 | orchestrator | 2025-09-20 10:55:51.003420 | orchestrator | TASK [ceph-facts : Get 
current fsid] ******************************************* 2025-09-20 10:55:51.003435 | orchestrator | Saturday 20 September 2025 10:45:40 +0000 (0:00:02.201) 0:00:23.966 **** 2025-09-20 10:55:51.003451 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.003467 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.003480 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.003494 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.003509 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.003672 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.003696 | orchestrator | 2025-09-20 10:55:51.003873 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-20 10:55:51.003887 | orchestrator | Saturday 20 September 2025 10:45:42 +0000 (0:00:02.044) 0:00:26.011 **** 2025-09-20 10:55:51.003896 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.003921 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.003931 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.003940 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.003950 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.003960 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.003970 | orchestrator | 2025-09-20 10:55:51.003979 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-20 10:55:51.003989 | orchestrator | Saturday 20 September 2025 10:45:43 +0000 (0:00:01.233) 0:00:27.244 **** 2025-09-20 10:55:51.003999 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004008 | orchestrator | 2025-09-20 10:55:51.004018 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-20 10:55:51.004028 | orchestrator | Saturday 20 September 2025 10:45:44 +0000 (0:00:00.122) 0:00:27.367 **** 2025-09-20 10:55:51.004037 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004047 | orchestrator | 2025-09-20 10:55:51.004057 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-20 10:55:51.004066 | orchestrator | Saturday 20 September 2025 10:45:44 +0000 (0:00:00.210) 0:00:27.577 **** 2025-09-20 10:55:51.004076 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004086 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004095 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004133 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004151 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004168 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004185 | orchestrator | 2025-09-20 10:55:51.004220 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-20 10:55:51.004231 | orchestrator | Saturday 20 September 2025 10:45:45 +0000 (0:00:00.815) 0:00:28.392 **** 2025-09-20 10:55:51.004240 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004250 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004260 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004270 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004280 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004289 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004299 | orchestrator | 2025-09-20 10:55:51.004309 | orchestrator | TASK 
[ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-20 10:55:51.004319 | orchestrator | Saturday 20 September 2025 10:45:46 +0000 (0:00:01.651) 0:00:30.044 **** 2025-09-20 10:55:51.004330 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004339 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004349 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004359 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004369 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004379 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004388 | orchestrator | 2025-09-20 10:55:51.004398 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-20 10:55:51.004408 | orchestrator | Saturday 20 September 2025 10:45:47 +0000 (0:00:01.209) 0:00:31.253 **** 2025-09-20 10:55:51.004418 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004431 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004447 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004463 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004478 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004494 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004510 | orchestrator | 2025-09-20 10:55:51.004527 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-20 10:55:51.004544 | orchestrator | Saturday 20 September 2025 10:45:49 +0000 (0:00:01.643) 0:00:32.897 **** 2025-09-20 10:55:51.004561 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004571 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004581 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004590 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004610 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004619 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004629 | orchestrator | 2025-09-20 10:55:51.004638 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-20 10:55:51.004648 | orchestrator | Saturday 20 September 2025 10:45:50 +0000 (0:00:00.782) 0:00:33.680 **** 2025-09-20 10:55:51.004658 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004668 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004677 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004687 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004696 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004706 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004715 | orchestrator | 2025-09-20 10:55:51.004725 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-20 10:55:51.004735 | orchestrator | Saturday 20 September 2025 10:45:51 +0000 (0:00:00.776) 0:00:34.456 **** 2025-09-20 10:55:51.004745 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.004754 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.004764 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.004773 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.004783 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.004792 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.004802 | orchestrator | 2025-09-20 10:55:51.004811 | 
orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-20 10:55:51.004821 | orchestrator | Saturday 20 September 2025 10:45:51 +0000 (0:00:00.670) 0:00:35.127 **** 2025-09-20 10:55:51.004833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504', 'dm-uuid-LVM-GFTN8eCjsDhsvHcLnbBW6Hiira8lKL1udVxFf2qXf8ZmfdhnZhdqKcyB2IPw7k07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04', 'dm-uuid-LVM-6Wey4TM1haZ7gjGkCtFB3Rfa02eGaKNTX7bv20h3mT29Iv3VklR0vD9Ut9ae9rNk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be', 'dm-uuid-LVM-zmHMarQ0GeOvwHa2octALRNuK9Mtv96G2WqbUvaxLe5TQX9CF3AdvLI3zAtwFBsi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004924 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084', 'dm-uuid-LVM-CivDXxDPNlR0kW7Fk5YuJ0VOnz3lPYRaQs2J5U20gpn0B3yD0ZtOYrjPGdQ6y2jA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.004997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FHfINo-QbB8-1gtM-lHmb-aZM1-kVg4-ymeA3K', 'scsi-0QEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf', 'scsi-SQEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CcMY6h-Yvso-Pyog-AJRg-iOyn-jVml-1IjQeN', 'scsi-0QEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652', 'scsi-SQEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UsQCdZ-EDR1-kfc0-B15p-aM1k-8uJJ-f2yIAP', 'scsi-0QEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949', 'scsi-SQEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f', 'scsi-SQEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-We5Mb4-JMDz-2gCV-40VR-14de-936x-g35BLT', 'scsi-0QEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58', 'scsi-SQEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22', 'scsi-SQEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19', 'dm-uuid-LVM-NIIieIrZpMwiF4zA1j7rPvZFGNxRh5VjUBRdeW4vjom4PIIluTcQ5EkcZbkGczdj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d', 'dm-uuid-LVM-2G92QntVglL9q1MRd9Z9LPlS1Py9FnebRqBci7ddGzK8oFVFlZTRxaxGpVx2OTF1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005388 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.005398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2pvgxu-QNeD-ciqZ-JOIj-NCHU-4b2C-6GfcOT', 'scsi-0QEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b', 'scsi-SQEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vgBYvF-M13W-988Q-HWt3-20j3-qTzr-1oUxcy', 'scsi-0QEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8', 'scsi-SQEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b', 'scsi-SQEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005731 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.005739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:55:51.005841 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.005849 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.005857 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.005865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:55:51.005945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-20 10:55:51.005967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-20 10:55:51.005976 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:55:51.005984 | orchestrator |
2025-09-20 10:55:51.005992 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-20 10:55:51.006001 | orchestrator | Saturday 20 September 2025 10:45:53 +0000 (0:00:01.369) 0:00:36.497 ****
2025-09-20 10:55:51.006009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504', 'dm-uuid-LVM-GFTN8eCjsDhsvHcLnbBW6Hiira8lKL1udVxFf2qXf8ZmfdhnZhdqKcyB2IPw7k07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1',
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006049 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04', 'dm-uuid-LVM-6Wey4TM1haZ7gjGkCtFB3Rfa02eGaKNTX7bv20h3mT29Iv3VklR0vD9Ut9ae9rNk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006058 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006097 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be', 'dm-uuid-LVM-zmHMarQ0GeOvwHa2octALRNuK9Mtv96G2WqbUvaxLe5TQX9CF3AdvLI3zAtwFBsi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084', 'dm-uuid-LVM-CivDXxDPNlR0kW7Fk5YuJ0VOnz3lPYRaQs2J5U20gpn0B3yD0ZtOYrjPGdQ6y2jA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006186 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006234 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006242 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19', 'dm-uuid-LVM-NIIieIrZpMwiF4zA1j7rPvZFGNxRh5VjUBRdeW4vjom4PIIluTcQ5EkcZbkGczdj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006311 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FHfINo-QbB8-1gtM-lHmb-aZM1-kVg4-ymeA3K', 'scsi-0QEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf', 'scsi-SQEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d', 'dm-uuid-LVM-2G92QntVglL9q1MRd9Z9LPlS1Py9FnebRqBci7ddGzK8oFVFlZTRxaxGpVx2OTF1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006829 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006869 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UsQCdZ-EDR1-kfc0-B15p-aM1k-8uJJ-f2yIAP', 'scsi-0QEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949', 'scsi-SQEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CcMY6h-Yvso-Pyog-AJRg-iOyn-jVml-1IjQeN', 'scsi-0QEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652', 'scsi-SQEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-We5Mb4-JMDz-2gCV-40VR-14de-936x-g35BLT', 'scsi-0QEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58', 'scsi-SQEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f', 'scsi-SQEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22', 'scsi-SQEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.006995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007046 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007060 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2pvgxu-QNeD-ciqZ-JOIj-NCHU-4b2C-6GfcOT', 'scsi-0QEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b', 'scsi-SQEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007093 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007123 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007133 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vgBYvF-M13W-988Q-HWt3-20j3-qTzr-1oUxcy', 'scsi-0QEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8', 'scsi-SQEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007155 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.007164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007172 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b', 'scsi-SQEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007199 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b575f8d-db95-47cd-bd18-28166b169c8c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007212 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 
'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007228 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007237 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007246 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007259 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007267 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007275 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007284 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007300 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007309 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007322 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a06e929-54a6-429b-9235-a8b1ff4ea0a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007332 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007340 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.007348 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.007356 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.007364 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.007383 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007400 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007419 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007432 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007446 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007487 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007511 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9a3bff6-b113-4fee-8d66-62177b4eee9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007526 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:55:51.007535 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.007543 | orchestrator | 2025-09-20 10:55:51.007552 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-20 10:55:51.007560 | orchestrator | Saturday 20 September 2025 10:45:54 +0000 (0:00:01.333) 0:00:37.830 **** 2025-09-20 10:55:51.007577 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.007590 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.007598 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.007606 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.007614 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.007622 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.007630 | orchestrator | 2025-09-20 10:55:51.007638 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-20 10:55:51.007646 | orchestrator | Saturday 20 September 2025 10:45:55 +0000 (0:00:01.219) 0:00:39.050 **** 2025-09-20 10:55:51.007654 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.007662 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.007670 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.007677 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.007685 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.007693 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.007701 | orchestrator | 2025-09-20 10:55:51.007709 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-20 10:55:51.007717 | orchestrator | Saturday 20 September 2025 10:45:56 +0000 (0:00:00.464) 0:00:39.514 **** 2025-09-20 10:55:51.007725 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.007733 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.007741 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.007749 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.007756 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.007764 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.007772 | orchestrator | 2025-09-20 10:55:51.007780 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-20 10:55:51.007788 | orchestrator | Saturday 20 September 2025 10:45:57 +0000 (0:00:00.910) 0:00:40.425 **** 2025-09-20 10:55:51.007796 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.007804 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.007812 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.007819 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.007831 | orchestrator | skipping: [testbed-node-1] 
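
The long run of per-item "skipping" messages above comes from a ceph-ansible task that loops over every block device reported in the hosts' facts and is guarded by the conditions quoted in the log output (osd_auto_discovery | default(False) | bool and inventory_hostname in groups.get(osd_group_name, [])); because those guards evaluate to false on these hosts, each device entry is reported as skipped. Below is a minimal illustrative sketch of such a guarded device loop, assuming the dict2items pattern implied by the item/key/value structure seen in the log; it is not the actual task from the ceph-ansible roles, and the task name and debug message are placeholders.

# Illustrative sketch only (not the ceph-ansible task): iterate over every block
# device from the gathered facts and skip the whole loop when the guards are false.
- name: Inspect candidate OSD devices (illustrative sketch)
  ansible.builtin.debug:
    msg: "device /dev/{{ item.key }}: size={{ item.value.size }}, rotational={{ item.value.rotational }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when:
    - inventory_hostname in groups.get(osd_group_name, [])
    - osd_auto_discovery | default(False) | bool

With osd_auto_discovery left at its default of false in this testbed, a loop of this shape yields exactly the per-device "Conditional result was False" skips captured above, one entry per loop0-loop7, sda, and sr0 device on each node.
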
2025-09-20 10:55:51.007844 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.007857 | orchestrator | 2025-09-20 10:55:51.007870 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-20 10:55:51.007882 | orchestrator | Saturday 20 September 2025 10:45:58 +0000 (0:00:01.501) 0:00:41.927 **** 2025-09-20 10:55:51.007895 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.007907 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.007920 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.007933 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.007945 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.007959 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.007972 | orchestrator | 2025-09-20 10:55:51.007985 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-20 10:55:51.007998 | orchestrator | Saturday 20 September 2025 10:46:00 +0000 (0:00:01.562) 0:00:43.490 **** 2025-09-20 10:55:51.008012 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008026 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.008039 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.008053 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.008062 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.008069 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.008077 | orchestrator | 2025-09-20 10:55:51.008085 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-20 10:55:51.008093 | orchestrator | Saturday 20 September 2025 10:46:02 +0000 (0:00:02.166) 0:00:45.656 **** 2025-09-20 10:55:51.008154 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-20 10:55:51.008164 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-20 10:55:51.008172 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-20 10:55:51.008180 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-20 10:55:51.008199 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 10:55:51.008207 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-20 10:55:51.008215 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-20 10:55:51.008223 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-20 10:55:51.008231 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-20 10:55:51.008239 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-20 10:55:51.008247 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-20 10:55:51.008255 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-20 10:55:51.008263 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-20 10:55:51.008271 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-20 10:55:51.008278 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-20 10:55:51.008286 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-20 10:55:51.008294 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-20 10:55:51.008302 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-20 10:55:51.008310 | orchestrator | 2025-09-20 10:55:51.008318 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] 
************************* 2025-09-20 10:55:51.008327 | orchestrator | Saturday 20 September 2025 10:46:06 +0000 (0:00:03.671) 0:00:49.328 **** 2025-09-20 10:55:51.008335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 10:55:51.008343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 10:55:51.008351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 10:55:51.008359 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-20 10:55:51.008374 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-20 10:55:51.008382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-20 10:55:51.008390 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.008398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-20 10:55:51.008406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-20 10:55:51.008427 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-20 10:55:51.008442 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.008455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 10:55:51.008467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 10:55:51.008480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 10:55:51.008494 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-20 10:55:51.008507 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-20 10:55:51.008520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-20 10:55:51.008529 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.008536 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.008544 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-20 10:55:51.008552 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-20 10:55:51.008560 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-20 10:55:51.008567 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.008575 | orchestrator | 2025-09-20 10:55:51.008583 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-20 10:55:51.008591 | orchestrator | Saturday 20 September 2025 10:46:07 +0000 (0:00:01.066) 0:00:50.395 **** 2025-09-20 10:55:51.008598 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.008606 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.008614 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.008622 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.008636 | orchestrator | 2025-09-20 10:55:51.008645 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-20 10:55:51.008653 | orchestrator | Saturday 20 September 2025 10:46:08 +0000 (0:00:01.271) 0:00:51.666 **** 2025-09-20 10:55:51.008661 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008669 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.008677 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
10:55:51.008684 | orchestrator | 2025-09-20 10:55:51.008691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-20 10:55:51.008697 | orchestrator | Saturday 20 September 2025 10:46:08 +0000 (0:00:00.372) 0:00:52.038 **** 2025-09-20 10:55:51.008704 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008711 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.008717 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.008724 | orchestrator | 2025-09-20 10:55:51.008731 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-20 10:55:51.008737 | orchestrator | Saturday 20 September 2025 10:46:09 +0000 (0:00:00.391) 0:00:52.430 **** 2025-09-20 10:55:51.008744 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008751 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.008757 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.008764 | orchestrator | 2025-09-20 10:55:51.008771 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-20 10:55:51.008778 | orchestrator | Saturday 20 September 2025 10:46:09 +0000 (0:00:00.859) 0:00:53.289 **** 2025-09-20 10:55:51.008784 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.008791 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.008797 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.008804 | orchestrator | 2025-09-20 10:55:51.008811 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-20 10:55:51.008817 | orchestrator | Saturday 20 September 2025 10:46:10 +0000 (0:00:00.525) 0:00:53.814 **** 2025-09-20 10:55:51.008824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.008831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.008837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.008844 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008851 | orchestrator | 2025-09-20 10:55:51.008858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-20 10:55:51.008864 | orchestrator | Saturday 20 September 2025 10:46:10 +0000 (0:00:00.430) 0:00:54.245 **** 2025-09-20 10:55:51.008871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.008878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.008884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.008891 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008897 | orchestrator | 2025-09-20 10:55:51.008904 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-20 10:55:51.008911 | orchestrator | Saturday 20 September 2025 10:46:11 +0000 (0:00:00.580) 0:00:54.825 **** 2025-09-20 10:55:51.008918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.008924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.008931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.008937 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.008944 | orchestrator | 2025-09-20 10:55:51.008951 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] 
*************************** 2025-09-20 10:55:51.008957 | orchestrator | Saturday 20 September 2025 10:46:12 +0000 (0:00:00.562) 0:00:55.388 **** 2025-09-20 10:55:51.008964 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.008971 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.008981 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.008988 | orchestrator | 2025-09-20 10:55:51.008995 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-20 10:55:51.009001 | orchestrator | Saturday 20 September 2025 10:46:12 +0000 (0:00:00.434) 0:00:55.822 **** 2025-09-20 10:55:51.009008 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-20 10:55:51.009015 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-20 10:55:51.009021 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-20 10:55:51.009028 | orchestrator | 2025-09-20 10:55:51.009042 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-20 10:55:51.009051 | orchestrator | Saturday 20 September 2025 10:46:14 +0000 (0:00:01.597) 0:00:57.420 **** 2025-09-20 10:55:51.009062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:55:51.009074 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:55:51.009085 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:55:51.009096 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-20 10:55:51.009126 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-20 10:55:51.009137 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-20 10:55:51.009148 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-20 10:55:51.009159 | orchestrator | 2025-09-20 10:55:51.009171 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-20 10:55:51.009181 | orchestrator | Saturday 20 September 2025 10:46:15 +0000 (0:00:01.242) 0:00:58.663 **** 2025-09-20 10:55:51.009188 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:55:51.009195 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:55:51.009201 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:55:51.009208 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-20 10:55:51.009215 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-20 10:55:51.009222 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-20 10:55:51.009232 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-20 10:55:51.009243 | orchestrator | 2025-09-20 10:55:51.009254 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.009266 | orchestrator | Saturday 20 September 2025 10:46:17 +0000 (0:00:02.302) 0:01:00.966 **** 2025-09-20 10:55:51.009277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-20 10:55:51.009289 | orchestrator | 2025-09-20 10:55:51.009300 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.009307 | orchestrator | Saturday 20 September 2025 10:46:19 +0000 (0:00:02.229) 0:01:03.195 **** 2025-09-20 10:55:51.009314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.009321 | orchestrator | 2025-09-20 10:55:51.009328 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.009334 | orchestrator | Saturday 20 September 2025 10:46:21 +0000 (0:00:01.483) 0:01:04.678 **** 2025-09-20 10:55:51.009341 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.009348 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.009355 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.009361 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.009368 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.009381 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.009388 | orchestrator | 2025-09-20 10:55:51.009394 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.009401 | orchestrator | Saturday 20 September 2025 10:46:23 +0000 (0:00:02.016) 0:01:06.695 **** 2025-09-20 10:55:51.009408 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.009415 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.009421 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.009431 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.009443 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.009454 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.009466 | orchestrator | 2025-09-20 10:55:51.009477 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.009490 | orchestrator | Saturday 20 September 2025 10:46:25 +0000 (0:00:01.670) 0:01:08.366 **** 2025-09-20 10:55:51.009497 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.009504 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.009511 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.009517 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.009524 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.009531 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.009538 | orchestrator | 2025-09-20 10:55:51.009544 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.009551 | orchestrator | Saturday 20 September 2025 10:46:26 +0000 (0:00:00.962) 0:01:09.328 **** 2025-09-20 10:55:51.009558 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.009565 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.009572 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.009578 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.009585 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.009591 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.009598 | orchestrator | 2025-09-20 10:55:51.009605 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.009612 | orchestrator | Saturday 20 September 2025 
10:46:26 +0000 (0:00:00.791) 0:01:10.119 **** 2025-09-20 10:55:51.009618 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.009625 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.009632 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.009639 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.009645 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.009652 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.009659 | orchestrator | 2025-09-20 10:55:51.009666 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.009682 | orchestrator | Saturday 20 September 2025 10:46:27 +0000 (0:00:01.108) 0:01:11.228 **** 2025-09-20 10:55:51.009690 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.009696 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.009703 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.009710 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.009716 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.009723 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.009729 | orchestrator | 2025-09-20 10:55:51.009736 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.009743 | orchestrator | Saturday 20 September 2025 10:46:28 +0000 (0:00:00.604) 0:01:11.832 **** 2025-09-20 10:55:51.009749 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.009756 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.009763 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.009769 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.009776 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.009782 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.009789 | orchestrator | 2025-09-20 10:55:51.009795 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.009802 | orchestrator | Saturday 20 September 2025 10:46:29 +0000 (0:00:00.939) 0:01:12.772 **** 2025-09-20 10:55:51.009813 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.009820 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.009827 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.009833 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.009840 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.009846 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.009853 | orchestrator | 2025-09-20 10:55:51.009860 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.009866 | orchestrator | Saturday 20 September 2025 10:46:30 +0000 (0:00:01.076) 0:01:13.848 **** 2025-09-20 10:55:51.009873 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.009879 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.009886 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.009892 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.009899 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.009905 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.009911 | orchestrator | 2025-09-20 10:55:51.009918 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 10:55:51.009925 | orchestrator | Saturday 20 September 2025 10:46:31 +0000 (0:00:01.231) 0:01:15.080 **** 2025-09-20 10:55:51.009931 | 
orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.009938 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.009945 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.009951 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.009958 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.009964 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.009971 | orchestrator | 2025-09-20 10:55:51.009977 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 10:55:51.009984 | orchestrator | Saturday 20 September 2025 10:46:32 +0000 (0:00:00.943) 0:01:16.023 **** 2025-09-20 10:55:51.009990 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.009997 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.010003 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.010010 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.010063 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.010070 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.010077 | orchestrator | 2025-09-20 10:55:51.010084 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 10:55:51.010091 | orchestrator | Saturday 20 September 2025 10:46:33 +0000 (0:00:00.548) 0:01:16.571 **** 2025-09-20 10:55:51.010098 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.010124 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.010131 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.010137 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010144 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010151 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010157 | orchestrator | 2025-09-20 10:55:51.010164 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 10:55:51.010171 | orchestrator | Saturday 20 September 2025 10:46:33 +0000 (0:00:00.622) 0:01:17.194 **** 2025-09-20 10:55:51.010178 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.010184 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.010191 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.010198 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010204 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010211 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010218 | orchestrator | 2025-09-20 10:55:51.010224 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.010231 | orchestrator | Saturday 20 September 2025 10:46:34 +0000 (0:00:00.618) 0:01:17.812 **** 2025-09-20 10:55:51.010238 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.010245 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.010251 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.010258 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010270 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010276 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010283 | orchestrator | 2025-09-20 10:55:51.010290 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.010297 | orchestrator | Saturday 20 September 2025 10:46:35 +0000 (0:00:00.855) 0:01:18.667 **** 2025-09-20 10:55:51.010303 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
10:55:51.010310 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.010320 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.010332 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010343 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010353 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010365 | orchestrator | 2025-09-20 10:55:51.010375 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.010386 | orchestrator | Saturday 20 September 2025 10:46:35 +0000 (0:00:00.542) 0:01:19.210 **** 2025-09-20 10:55:51.010397 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.010408 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.010420 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.010432 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010443 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010454 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010465 | orchestrator | 2025-09-20 10:55:51.010503 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 10:55:51.010512 | orchestrator | Saturday 20 September 2025 10:46:36 +0000 (0:00:00.694) 0:01:19.905 **** 2025-09-20 10:55:51.010519 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.010526 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.010533 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.010539 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.010546 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.010553 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.010560 | orchestrator | 2025-09-20 10:55:51.010566 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.010573 | orchestrator | Saturday 20 September 2025 10:46:37 +0000 (0:00:00.806) 0:01:20.711 **** 2025-09-20 10:55:51.010580 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.010587 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.010593 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.010600 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.010606 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.010613 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.010619 | orchestrator | 2025-09-20 10:55:51.010626 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.010633 | orchestrator | Saturday 20 September 2025 10:46:38 +0000 (0:00:01.058) 0:01:21.770 **** 2025-09-20 10:55:51.010640 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.010646 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.010653 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.010659 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.010666 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.010673 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.010679 | orchestrator | 2025-09-20 10:55:51.010686 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-20 10:55:51.010693 | orchestrator | Saturday 20 September 2025 10:46:39 +0000 (0:00:01.250) 0:01:23.020 **** 2025-09-20 10:55:51.010699 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.010706 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.010713 
| orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.010719 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.010726 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.010732 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.010739 | orchestrator | 2025-09-20 10:55:51.010746 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-20 10:55:51.010780 | orchestrator | Saturday 20 September 2025 10:46:41 +0000 (0:00:01.377) 0:01:24.398 **** 2025-09-20 10:55:51.010788 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.010794 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.010801 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.010807 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.010814 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.010821 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.010827 | orchestrator | 2025-09-20 10:55:51.010834 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-20 10:55:51.010841 | orchestrator | Saturday 20 September 2025 10:46:44 +0000 (0:00:03.061) 0:01:27.460 **** 2025-09-20 10:55:51.010847 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.010854 | orchestrator | 2025-09-20 10:55:51.010861 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-20 10:55:51.010868 | orchestrator | Saturday 20 September 2025 10:46:45 +0000 (0:00:01.184) 0:01:28.644 **** 2025-09-20 10:55:51.010874 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.010881 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.010887 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.010894 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010901 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010907 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010914 | orchestrator | 2025-09-20 10:55:51.010920 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-20 10:55:51.010927 | orchestrator | Saturday 20 September 2025 10:46:45 +0000 (0:00:00.584) 0:01:29.228 **** 2025-09-20 10:55:51.010934 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.010940 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.010947 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.010953 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.010960 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.010967 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.010973 | orchestrator | 2025-09-20 10:55:51.010980 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-20 10:55:51.010986 | orchestrator | Saturday 20 September 2025 10:46:46 +0000 (0:00:00.835) 0:01:30.063 **** 2025-09-20 10:55:51.010993 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 10:55:51.011000 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 10:55:51.011006 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 10:55:51.011013 | 
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 10:55:51.011020 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 10:55:51.011026 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 10:55:51.011033 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 10:55:51.011039 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 10:55:51.011046 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 10:55:51.011052 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 10:55:51.011059 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 10:55:51.011074 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 10:55:51.011081 | orchestrator | 2025-09-20 10:55:51.011088 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-20 10:55:51.011143 | orchestrator | Saturday 20 September 2025 10:46:48 +0000 (0:00:01.291) 0:01:31.355 **** 2025-09-20 10:55:51.011153 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.011160 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.011166 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.011173 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.011180 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.011186 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.011193 | orchestrator | 2025-09-20 10:55:51.011200 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-20 10:55:51.011207 | orchestrator | Saturday 20 September 2025 10:46:49 +0000 (0:00:01.195) 0:01:32.551 **** 2025-09-20 10:55:51.011214 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011220 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011227 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011233 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011240 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011246 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011253 | orchestrator | 2025-09-20 10:55:51.011260 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-20 10:55:51.011266 | orchestrator | Saturday 20 September 2025 10:46:49 +0000 (0:00:00.584) 0:01:33.135 **** 2025-09-20 10:55:51.011273 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011280 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011286 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011293 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011299 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011306 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011313 | orchestrator | 2025-09-20 10:55:51.011319 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-20 10:55:51.011326 | orchestrator | Saturday 20 September 2025 10:46:50 +0000 (0:00:00.773) 0:01:33.908 **** 2025-09-20 10:55:51.011333 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 10:55:51.011339 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011346 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011352 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011359 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011365 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011372 | orchestrator | 2025-09-20 10:55:51.011379 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-20 10:55:51.011385 | orchestrator | Saturday 20 September 2025 10:46:51 +0000 (0:00:00.600) 0:01:34.509 **** 2025-09-20 10:55:51.011393 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.011399 | orchestrator | 2025-09-20 10:55:51.011406 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-20 10:55:51.011413 | orchestrator | Saturday 20 September 2025 10:46:52 +0000 (0:00:01.231) 0:01:35.741 **** 2025-09-20 10:55:51.011420 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.011428 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.011440 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.011451 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.011462 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.011472 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.011484 | orchestrator | 2025-09-20 10:55:51.011495 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-20 10:55:51.011506 | orchestrator | Saturday 20 September 2025 10:47:41 +0000 (0:00:49.531) 0:02:25.272 **** 2025-09-20 10:55:51.011517 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 10:55:51.011528 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 10:55:51.011539 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 10:55:51.011558 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011570 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 10:55:51.011581 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 10:55:51.011593 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 10:55:51.011604 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011616 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 10:55:51.011625 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 10:55:51.011631 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 10:55:51.011638 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011645 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 10:55:51.011651 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 10:55:51.011657 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 10:55:51.011663 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011670 | 
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 10:55:51.011676 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 10:55:51.011682 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 10:55:51.011688 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011694 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 10:55:51.011705 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 10:55:51.011716 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 10:55:51.011722 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011728 | orchestrator | 2025-09-20 10:55:51.011735 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-20 10:55:51.011741 | orchestrator | Saturday 20 September 2025 10:47:42 +0000 (0:00:00.742) 0:02:26.015 **** 2025-09-20 10:55:51.011747 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011753 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011759 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011766 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011772 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011778 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011784 | orchestrator | 2025-09-20 10:55:51.011790 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-20 10:55:51.011797 | orchestrator | Saturday 20 September 2025 10:47:43 +0000 (0:00:00.682) 0:02:26.698 **** 2025-09-20 10:55:51.011803 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011809 | orchestrator | 2025-09-20 10:55:51.011815 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-20 10:55:51.011821 | orchestrator | Saturday 20 September 2025 10:47:43 +0000 (0:00:00.130) 0:02:26.829 **** 2025-09-20 10:55:51.011827 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011834 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011840 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011846 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011852 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011858 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011864 | orchestrator | 2025-09-20 10:55:51.011871 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-20 10:55:51.011877 | orchestrator | Saturday 20 September 2025 10:47:44 +0000 (0:00:00.516) 0:02:27.345 **** 2025-09-20 10:55:51.011883 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011889 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011900 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011907 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011913 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011919 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011925 | orchestrator | 2025-09-20 10:55:51.011931 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-20 10:55:51.011938 | orchestrator | Saturday 20 September 2025 10:47:44 +0000 
(0:00:00.634) 0:02:27.980 **** 2025-09-20 10:55:51.011944 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.011950 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.011956 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.011962 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.011968 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.011974 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.011980 | orchestrator | 2025-09-20 10:55:51.011987 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-20 10:55:51.011993 | orchestrator | Saturday 20 September 2025 10:47:45 +0000 (0:00:00.433) 0:02:28.414 **** 2025-09-20 10:55:51.011999 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.012005 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.012012 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.012018 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.012024 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.012030 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.012036 | orchestrator | 2025-09-20 10:55:51.012043 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-20 10:55:51.012049 | orchestrator | Saturday 20 September 2025 10:47:47 +0000 (0:00:02.643) 0:02:31.058 **** 2025-09-20 10:55:51.012055 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.012061 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.012067 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.012073 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.012080 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.012086 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.012092 | orchestrator | 2025-09-20 10:55:51.012115 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-20 10:55:51.012123 | orchestrator | Saturday 20 September 2025 10:47:48 +0000 (0:00:00.547) 0:02:31.606 **** 2025-09-20 10:55:51.012130 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.012137 | orchestrator | 2025-09-20 10:55:51.012144 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-20 10:55:51.012150 | orchestrator | Saturday 20 September 2025 10:47:49 +0000 (0:00:01.165) 0:02:32.771 **** 2025-09-20 10:55:51.012156 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012162 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012168 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012174 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012181 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012187 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012193 | orchestrator | 2025-09-20 10:55:51.012199 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-20 10:55:51.012206 | orchestrator | Saturday 20 September 2025 10:47:50 +0000 (0:00:00.748) 0:02:33.520 **** 2025-09-20 10:55:51.012212 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012218 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012224 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012230 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012236 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012242 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012248 | orchestrator | 2025-09-20 10:55:51.012254 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-20 10:55:51.012260 | orchestrator | Saturday 20 September 2025 10:47:50 +0000 (0:00:00.580) 0:02:34.100 **** 2025-09-20 10:55:51.012271 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012277 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012283 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012289 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012296 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012309 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012315 | orchestrator | 2025-09-20 10:55:51.012322 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-20 10:55:51.012328 | orchestrator | Saturday 20 September 2025 10:47:51 +0000 (0:00:00.545) 0:02:34.646 **** 2025-09-20 10:55:51.012334 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012340 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012346 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012352 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012358 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012364 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012370 | orchestrator | 2025-09-20 10:55:51.012377 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-20 10:55:51.012383 | orchestrator | Saturday 20 September 2025 10:47:52 +0000 (0:00:00.757) 0:02:35.403 **** 2025-09-20 10:55:51.012389 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012395 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012401 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012407 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012413 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012419 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012426 | orchestrator | 2025-09-20 10:55:51.012437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-20 10:55:51.012448 | orchestrator | Saturday 20 September 2025 10:47:52 +0000 (0:00:00.573) 0:02:35.977 **** 2025-09-20 10:55:51.012458 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012468 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012479 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012489 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012498 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012504 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012510 | orchestrator | 2025-09-20 10:55:51.012516 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-20 10:55:51.012523 | orchestrator | Saturday 20 September 2025 10:47:53 +0000 (0:00:00.713) 0:02:36.690 **** 2025-09-20 10:55:51.012529 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012535 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012541 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
10:55:51.012547 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012554 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012560 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012566 | orchestrator | 2025-09-20 10:55:51.012572 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-20 10:55:51.012579 | orchestrator | Saturday 20 September 2025 10:47:53 +0000 (0:00:00.580) 0:02:37.271 **** 2025-09-20 10:55:51.012585 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.012591 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.012597 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.012603 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.012609 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.012619 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.012629 | orchestrator | 2025-09-20 10:55:51.012639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-20 10:55:51.012649 | orchestrator | Saturday 20 September 2025 10:47:54 +0000 (0:00:00.669) 0:02:37.940 **** 2025-09-20 10:55:51.012659 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.012669 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.012698 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.012708 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.012719 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.012730 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.012737 | orchestrator | 2025-09-20 10:55:51.012743 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-20 10:55:51.012750 | orchestrator | Saturday 20 September 2025 10:47:55 +0000 (0:00:01.189) 0:02:39.129 **** 2025-09-20 10:55:51.012756 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.012762 | orchestrator | 2025-09-20 10:55:51.012769 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-20 10:55:51.012775 | orchestrator | Saturday 20 September 2025 10:47:56 +0000 (0:00:01.096) 0:02:40.226 **** 2025-09-20 10:55:51.012781 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-20 10:55:51.012788 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-20 10:55:51.012794 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-20 10:55:51.012800 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-20 10:55:51.012807 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-20 10:55:51.012813 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-20 10:55:51.012819 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-20 10:55:51.012825 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-20 10:55:51.012832 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-20 10:55:51.012838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-20 10:55:51.012844 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-20 10:55:51.012851 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-20 10:55:51.012857 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-20 10:55:51.012863 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-20 10:55:51.012869 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-20 10:55:51.012876 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-20 10:55:51.012882 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-20 10:55:51.012888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-20 10:55:51.012894 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-20 10:55:51.012901 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-20 10:55:51.012915 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-20 10:55:51.012922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-20 10:55:51.012928 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-20 10:55:51.012934 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-20 10:55:51.012940 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-20 10:55:51.012946 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-20 10:55:51.012952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-20 10:55:51.012958 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-20 10:55:51.012965 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-20 10:55:51.012975 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-20 10:55:51.012984 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-20 10:55:51.013000 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-20 10:55:51.013011 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-20 10:55:51.013022 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-20 10:55:51.013038 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-20 10:55:51.013048 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-20 10:55:51.013058 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-20 10:55:51.013067 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-20 10:55:51.013076 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-20 10:55:51.013084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-20 10:55:51.013093 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-20 10:55:51.013121 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-20 10:55:51.013131 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-20 10:55:51.013140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-20 10:55:51.013149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-20 10:55:51.013158 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-20 10:55:51.013167 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-20 10:55:51.013176 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-20 10:55:51.013185 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-20 10:55:51.013194 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-20 10:55:51.013204 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-20 10:55:51.013213 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-20 10:55:51.013223 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-20 10:55:51.013232 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-20 10:55:51.013242 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-20 10:55:51.013251 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-20 10:55:51.013261 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-20 10:55:51.013271 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-20 10:55:51.013280 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-20 10:55:51.013291 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-20 10:55:51.013301 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-20 10:55:51.013312 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-20 10:55:51.013321 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-20 10:55:51.013332 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-20 10:55:51.013342 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-20 10:55:51.013351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-20 10:55:51.013360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-20 10:55:51.013370 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-20 10:55:51.013379 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-20 10:55:51.013388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-20 10:55:51.013398 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-20 10:55:51.013408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-20 10:55:51.013418 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-20 10:55:51.013428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-20 10:55:51.013438 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-20 10:55:51.013454 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-20 10:55:51.013464 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-20 10:55:51.013473 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-20 10:55:51.013499 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-20 10:55:51.013510 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-20 10:55:51.013520 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-20 
10:55:51.013531 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-20 10:55:51.013541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-20 10:55:51.013551 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-20 10:55:51.013560 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-20 10:55:51.013569 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-20 10:55:51.013577 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-20 10:55:51.013586 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-20 10:55:51.013595 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-20 10:55:51.013604 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-20 10:55:51.013613 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-20 10:55:51.013622 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-20 10:55:51.013631 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-20 10:55:51.013642 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-20 10:55:51.013653 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-20 10:55:51.013662 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-20 10:55:51.013672 | orchestrator | 2025-09-20 10:55:51.013682 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-20 10:55:51.013692 | orchestrator | Saturday 20 September 2025 10:48:03 +0000 (0:00:06.375) 0:02:46.601 **** 2025-09-20 10:55:51.013702 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.013712 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.013722 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.013732 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.013743 | orchestrator | 2025-09-20 10:55:51.013753 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-20 10:55:51.013763 | orchestrator | Saturday 20 September 2025 10:48:04 +0000 (0:00:01.027) 0:02:47.629 **** 2025-09-20 10:55:51.013773 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.013784 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.013796 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.013803 | orchestrator | 2025-09-20 10:55:51.013809 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-20 10:55:51.013815 | orchestrator | Saturday 20 September 2025 10:48:04 +0000 (0:00:00.654) 0:02:48.283 **** 2025-09-20 10:55:51.013822 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.013828 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 
'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.013844 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.013851 | orchestrator | 2025-09-20 10:55:51.013857 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-20 10:55:51.013863 | orchestrator | Saturday 20 September 2025 10:48:06 +0000 (0:00:01.399) 0:02:49.683 **** 2025-09-20 10:55:51.013869 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.013876 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.013882 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.013889 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.013895 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.013901 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.013907 | orchestrator | 2025-09-20 10:55:51.013914 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-20 10:55:51.013920 | orchestrator | Saturday 20 September 2025 10:48:06 +0000 (0:00:00.569) 0:02:50.252 **** 2025-09-20 10:55:51.013926 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.013933 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.013939 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.013946 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.013957 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.013966 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.013976 | orchestrator | 2025-09-20 10:55:51.013985 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-20 10:55:51.013996 | orchestrator | Saturday 20 September 2025 10:48:07 +0000 (0:00:00.899) 0:02:51.151 **** 2025-09-20 10:55:51.014006 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014052 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014064 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014076 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014086 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014096 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014126 | orchestrator | 2025-09-20 10:55:51.014138 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-20 10:55:51.014149 | orchestrator | Saturday 20 September 2025 10:48:08 +0000 (0:00:00.585) 0:02:51.737 **** 2025-09-20 10:55:51.014180 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014191 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014201 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014210 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014219 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014228 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014238 | orchestrator | 2025-09-20 10:55:51.014248 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-20 10:55:51.014257 | orchestrator | Saturday 20 September 2025 10:48:09 +0000 (0:00:01.117) 0:02:52.855 **** 2025-09-20 10:55:51.014266 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014277 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014287 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014297 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014307 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014317 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014327 | orchestrator | 2025-09-20 10:55:51.014338 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-20 10:55:51.014346 | orchestrator | Saturday 20 September 2025 10:48:10 +0000 (0:00:00.770) 0:02:53.625 **** 2025-09-20 10:55:51.014352 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014358 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014364 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014370 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014376 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014382 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014388 | orchestrator | 2025-09-20 10:55:51.014402 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-20 10:55:51.014408 | orchestrator | Saturday 20 September 2025 10:48:11 +0000 (0:00:00.725) 0:02:54.350 **** 2025-09-20 10:55:51.014414 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014421 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014427 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014433 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014439 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014445 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014451 | orchestrator | 2025-09-20 10:55:51.014457 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-20 10:55:51.014464 | orchestrator | Saturday 20 September 2025 10:48:12 +0000 (0:00:01.057) 0:02:55.407 **** 2025-09-20 10:55:51.014470 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014476 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014482 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014489 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014495 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014501 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014507 | orchestrator | 2025-09-20 10:55:51.014513 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-20 10:55:51.014520 | orchestrator | Saturday 20 September 2025 10:48:12 +0000 (0:00:00.637) 0:02:56.045 **** 2025-09-20 10:55:51.014526 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014532 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014538 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014544 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.014551 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.014557 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.014563 | orchestrator | 2025-09-20 10:55:51.014570 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-20 10:55:51.014576 | orchestrator | Saturday 20 September 2025 10:48:15 +0000 (0:00:02.720) 0:02:58.766 **** 2025-09-20 10:55:51.014582 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.014588 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.014595 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014601 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014607 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.014613 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014620 | orchestrator | 2025-09-20 10:55:51.014626 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-20 10:55:51.014632 | orchestrator | Saturday 20 September 2025 10:48:16 +0000 (0:00:00.970) 0:02:59.736 **** 2025-09-20 10:55:51.014638 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.014645 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.014651 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.014657 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014663 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014669 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014676 | orchestrator | 2025-09-20 10:55:51.014682 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-20 10:55:51.014688 | orchestrator | Saturday 20 September 2025 10:48:17 +0000 (0:00:00.947) 0:03:00.684 **** 2025-09-20 10:55:51.014694 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014700 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014707 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014713 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014719 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014725 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014731 | orchestrator | 2025-09-20 10:55:51.014737 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-20 10:55:51.014744 | orchestrator | Saturday 20 September 2025 10:48:18 +0000 (0:00:00.652) 0:03:01.337 **** 2025-09-20 10:55:51.014755 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.014762 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.014768 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.014775 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014781 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014787 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014793 | orchestrator | 2025-09-20 10:55:51.014810 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-20 10:55:51.014817 | orchestrator | Saturday 20 September 2025 10:48:18 +0000 (0:00:00.704) 0:03:02.042 **** 2025-09-20 10:55:51.014825 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-20 10:55:51.014834 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-20 10:55:51.014841 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014848 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-20 10:55:51.014855 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-20 10:55:51.014861 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014867 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-20 10:55:51.014874 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-20 10:55:51.014880 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014887 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014893 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014899 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014905 | orchestrator | 2025-09-20 10:55:51.014911 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-20 10:55:51.014918 | orchestrator | Saturday 20 September 2025 10:48:19 +0000 (0:00:00.582) 0:03:02.624 **** 2025-09-20 10:55:51.014924 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014931 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014937 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.014943 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.014949 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.014955 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.014966 | orchestrator | 2025-09-20 10:55:51.014972 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-20 10:55:51.014979 | orchestrator | Saturday 20 September 2025 10:48:20 +0000 (0:00:00.717) 0:03:03.342 **** 2025-09-20 10:55:51.014985 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.014991 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.014997 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.015003 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015010 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015016 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015022 | orchestrator | 2025-09-20 10:55:51.015028 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ 
ceph_dashboard_call_item }}"] *** 2025-09-20 10:55:51.015035 | orchestrator | Saturday 20 September 2025 10:48:20 +0000 (0:00:00.572) 0:03:03.915 **** 2025-09-20 10:55:51.015041 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015047 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.015053 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.015060 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015066 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015072 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015079 | orchestrator | 2025-09-20 10:55:51.015085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-20 10:55:51.015091 | orchestrator | Saturday 20 September 2025 10:48:21 +0000 (0:00:01.262) 0:03:05.177 **** 2025-09-20 10:55:51.015097 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015150 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.015157 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015163 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.015169 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015175 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015182 | orchestrator | 2025-09-20 10:55:51.015188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-20 10:55:51.015194 | orchestrator | Saturday 20 September 2025 10:48:22 +0000 (0:00:00.785) 0:03:05.963 **** 2025-09-20 10:55:51.015201 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015215 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.015222 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.015228 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015234 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015241 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015247 | orchestrator | 2025-09-20 10:55:51.015253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-20 10:55:51.015260 | orchestrator | Saturday 20 September 2025 10:48:23 +0000 (0:00:00.945) 0:03:06.909 **** 2025-09-20 10:55:51.015266 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.015273 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.015279 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.015285 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015291 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015298 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015304 | orchestrator | 2025-09-20 10:55:51.015310 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-20 10:55:51.015316 | orchestrator | Saturday 20 September 2025 10:48:24 +0000 (0:00:00.988) 0:03:07.897 **** 2025-09-20 10:55:51.015323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.015329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.015336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.015342 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015348 | orchestrator | 2025-09-20 10:55:51.015354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-20 10:55:51.015361 | orchestrator 
| Saturday 20 September 2025 10:48:25 +0000 (0:00:00.591) 0:03:08.488 **** 2025-09-20 10:55:51.015371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.015378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.015384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.015390 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015396 | orchestrator | 2025-09-20 10:55:51.015403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-20 10:55:51.015409 | orchestrator | Saturday 20 September 2025 10:48:25 +0000 (0:00:00.533) 0:03:09.022 **** 2025-09-20 10:55:51.015415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.015421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.015428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.015434 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015440 | orchestrator | 2025-09-20 10:55:51.015447 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-20 10:55:51.015453 | orchestrator | Saturday 20 September 2025 10:48:26 +0000 (0:00:00.699) 0:03:09.722 **** 2025-09-20 10:55:51.015459 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.015466 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.015472 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.015478 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015484 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015491 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015497 | orchestrator | 2025-09-20 10:55:51.015503 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-20 10:55:51.015509 | orchestrator | Saturday 20 September 2025 10:48:27 +0000 (0:00:00.628) 0:03:10.350 **** 2025-09-20 10:55:51.015516 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-20 10:55:51.015522 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-20 10:55:51.015528 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015534 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-20 10:55:51.015541 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-20 10:55:51.015547 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015553 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-20 10:55:51.015559 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-20 10:55:51.015565 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015572 | orchestrator | 2025-09-20 10:55:51.015578 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-20 10:55:51.015584 | orchestrator | Saturday 20 September 2025 10:48:29 +0000 (0:00:01.976) 0:03:12.327 **** 2025-09-20 10:55:51.015590 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.015597 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.015603 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.015609 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.015615 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.015621 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.015627 | orchestrator | 2025-09-20 10:55:51.015634 | orchestrator | 
RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 10:55:51.015640 | orchestrator | Saturday 20 September 2025 10:48:31 +0000 (0:00:02.544) 0:03:14.872 **** 2025-09-20 10:55:51.015646 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.015653 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.015659 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.015665 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.015671 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.015676 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.015681 | orchestrator | 2025-09-20 10:55:51.015687 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-20 10:55:51.015693 | orchestrator | Saturday 20 September 2025 10:48:33 +0000 (0:00:02.183) 0:03:17.055 **** 2025-09-20 10:55:51.015702 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015707 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.015713 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.015718 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.015724 | orchestrator | 2025-09-20 10:55:51.015729 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-20 10:55:51.015735 | orchestrator | Saturday 20 September 2025 10:48:34 +0000 (0:00:00.833) 0:03:17.889 **** 2025-09-20 10:55:51.015740 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.015746 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.015751 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.015757 | orchestrator | 2025-09-20 10:55:51.015769 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-20 10:55:51.015775 | orchestrator | Saturday 20 September 2025 10:48:34 +0000 (0:00:00.239) 0:03:18.128 **** 2025-09-20 10:55:51.015780 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.015786 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.015791 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.015797 | orchestrator | 2025-09-20 10:55:51.015802 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-20 10:55:51.015808 | orchestrator | Saturday 20 September 2025 10:48:35 +0000 (0:00:01.101) 0:03:19.229 **** 2025-09-20 10:55:51.015813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 10:55:51.015819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 10:55:51.015824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 10:55:51.015830 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015835 | orchestrator | 2025-09-20 10:55:51.015841 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-20 10:55:51.015846 | orchestrator | Saturday 20 September 2025 10:48:36 +0000 (0:00:00.757) 0:03:19.987 **** 2025-09-20 10:55:51.015852 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.015857 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.015863 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.015868 | orchestrator | 2025-09-20 10:55:51.015873 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 
2025-09-20 10:55:51.015879 | orchestrator | Saturday 20 September 2025 10:48:37 +0000 (0:00:00.388) 0:03:20.375 **** 2025-09-20 10:55:51.015884 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.015890 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.015895 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.015901 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.015906 | orchestrator | 2025-09-20 10:55:51.015912 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-20 10:55:51.015917 | orchestrator | Saturday 20 September 2025 10:48:38 +0000 (0:00:00.959) 0:03:21.335 **** 2025-09-20 10:55:51.015923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.015928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.015934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.015939 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015945 | orchestrator | 2025-09-20 10:55:51.015950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-20 10:55:51.015956 | orchestrator | Saturday 20 September 2025 10:48:38 +0000 (0:00:00.353) 0:03:21.689 **** 2025-09-20 10:55:51.015961 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.015967 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.015972 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.015977 | orchestrator | 2025-09-20 10:55:51.015983 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-20 10:55:51.015988 | orchestrator | Saturday 20 September 2025 10:48:38 +0000 (0:00:00.553) 0:03:22.242 **** 2025-09-20 10:55:51.015998 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016003 | orchestrator | 2025-09-20 10:55:51.016009 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-20 10:55:51.016014 | orchestrator | Saturday 20 September 2025 10:48:39 +0000 (0:00:00.232) 0:03:22.474 **** 2025-09-20 10:55:51.016020 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016025 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.016030 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.016036 | orchestrator | 2025-09-20 10:55:51.016042 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-20 10:55:51.016047 | orchestrator | Saturday 20 September 2025 10:48:39 +0000 (0:00:00.287) 0:03:22.762 **** 2025-09-20 10:55:51.016053 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016058 | orchestrator | 2025-09-20 10:55:51.016064 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-20 10:55:51.016069 | orchestrator | Saturday 20 September 2025 10:48:39 +0000 (0:00:00.197) 0:03:22.960 **** 2025-09-20 10:55:51.016074 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016080 | orchestrator | 2025-09-20 10:55:51.016085 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-20 10:55:51.016091 | orchestrator | Saturday 20 September 2025 10:48:39 +0000 (0:00:00.315) 0:03:23.275 **** 2025-09-20 10:55:51.016096 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 10:55:51.016114 | orchestrator | 2025-09-20 10:55:51.016120 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-20 10:55:51.016125 | orchestrator | Saturday 20 September 2025 10:48:40 +0000 (0:00:00.255) 0:03:23.531 **** 2025-09-20 10:55:51.016131 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016136 | orchestrator | 2025-09-20 10:55:51.016141 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-20 10:55:51.016147 | orchestrator | Saturday 20 September 2025 10:48:40 +0000 (0:00:00.209) 0:03:23.740 **** 2025-09-20 10:55:51.016152 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016158 | orchestrator | 2025-09-20 10:55:51.016163 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-20 10:55:51.016168 | orchestrator | Saturday 20 September 2025 10:48:40 +0000 (0:00:00.225) 0:03:23.966 **** 2025-09-20 10:55:51.016174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.016179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.016185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.016190 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016195 | orchestrator | 2025-09-20 10:55:51.016201 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-20 10:55:51.016206 | orchestrator | Saturday 20 September 2025 10:48:41 +0000 (0:00:00.526) 0:03:24.492 **** 2025-09-20 10:55:51.016212 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016221 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.016230 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.016235 | orchestrator | 2025-09-20 10:55:51.016241 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-20 10:55:51.016246 | orchestrator | Saturday 20 September 2025 10:48:41 +0000 (0:00:00.476) 0:03:24.969 **** 2025-09-20 10:55:51.016252 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016257 | orchestrator | 2025-09-20 10:55:51.016263 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-20 10:55:51.016268 | orchestrator | Saturday 20 September 2025 10:48:41 +0000 (0:00:00.248) 0:03:25.217 **** 2025-09-20 10:55:51.016274 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016279 | orchestrator | 2025-09-20 10:55:51.016285 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-20 10:55:51.016290 | orchestrator | Saturday 20 September 2025 10:48:42 +0000 (0:00:00.238) 0:03:25.455 **** 2025-09-20 10:55:51.016300 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.016306 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.016311 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.016317 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.016322 | orchestrator | 2025-09-20 10:55:51.016328 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-20 10:55:51.016333 | orchestrator | Saturday 20 September 2025 10:48:43 +0000 (0:00:01.101) 0:03:26.557 **** 2025-09-20 
10:55:51.016339 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.016344 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.016350 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.016355 | orchestrator | 2025-09-20 10:55:51.016361 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-20 10:55:51.016366 | orchestrator | Saturday 20 September 2025 10:48:43 +0000 (0:00:00.334) 0:03:26.891 **** 2025-09-20 10:55:51.016371 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.016377 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.016382 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.016388 | orchestrator | 2025-09-20 10:55:51.016393 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-20 10:55:51.016399 | orchestrator | Saturday 20 September 2025 10:48:44 +0000 (0:00:01.391) 0:03:28.283 **** 2025-09-20 10:55:51.016404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.016410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.016415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.016421 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016426 | orchestrator | 2025-09-20 10:55:51.016432 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-20 10:55:51.016437 | orchestrator | Saturday 20 September 2025 10:48:45 +0000 (0:00:00.709) 0:03:28.993 **** 2025-09-20 10:55:51.016442 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.016448 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.016453 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.016459 | orchestrator | 2025-09-20 10:55:51.016464 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-20 10:55:51.016470 | orchestrator | Saturday 20 September 2025 10:48:46 +0000 (0:00:00.451) 0:03:29.444 **** 2025-09-20 10:55:51.016475 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.016481 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.016486 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.016491 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.016497 | orchestrator | 2025-09-20 10:55:51.016502 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-20 10:55:51.016508 | orchestrator | Saturday 20 September 2025 10:48:47 +0000 (0:00:01.415) 0:03:30.860 **** 2025-09-20 10:55:51.016513 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.016519 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.016524 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.016530 | orchestrator | 2025-09-20 10:55:51.016535 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-20 10:55:51.016541 | orchestrator | Saturday 20 September 2025 10:48:47 +0000 (0:00:00.379) 0:03:31.240 **** 2025-09-20 10:55:51.016546 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.016552 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.016557 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.016562 | orchestrator | 2025-09-20 10:55:51.016568 | orchestrator | RUNNING HANDLER 
[ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-20 10:55:51.016573 | orchestrator | Saturday 20 September 2025 10:48:49 +0000 (0:00:01.664) 0:03:32.904 **** 2025-09-20 10:55:51.016579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.016589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.016594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.016600 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016605 | orchestrator | 2025-09-20 10:55:51.016610 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-20 10:55:51.016616 | orchestrator | Saturday 20 September 2025 10:48:50 +0000 (0:00:00.579) 0:03:33.484 **** 2025-09-20 10:55:51.016621 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.016627 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.016632 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.016638 | orchestrator | 2025-09-20 10:55:51.016643 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-20 10:55:51.016649 | orchestrator | Saturday 20 September 2025 10:48:50 +0000 (0:00:00.367) 0:03:33.851 **** 2025-09-20 10:55:51.016654 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016659 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.016665 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.016670 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.016676 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.016681 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.016687 | orchestrator | 2025-09-20 10:55:51.016692 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-20 10:55:51.016704 | orchestrator | Saturday 20 September 2025 10:48:51 +0000 (0:00:01.030) 0:03:34.882 **** 2025-09-20 10:55:51.016710 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.016716 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.016721 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.016727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.016732 | orchestrator | 2025-09-20 10:55:51.016738 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-20 10:55:51.016743 | orchestrator | Saturday 20 September 2025 10:48:52 +0000 (0:00:01.324) 0:03:36.206 **** 2025-09-20 10:55:51.016748 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.016754 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.016759 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.016765 | orchestrator | 2025-09-20 10:55:51.016770 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-20 10:55:51.016776 | orchestrator | Saturday 20 September 2025 10:48:53 +0000 (0:00:00.423) 0:03:36.630 **** 2025-09-20 10:55:51.016781 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.016787 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.016792 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.016798 | orchestrator | 2025-09-20 10:55:51.016803 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-20 
10:55:51.016808 | orchestrator | Saturday 20 September 2025 10:48:54 +0000 (0:00:01.619) 0:03:38.249 **** 2025-09-20 10:55:51.016814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 10:55:51.016819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 10:55:51.016825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 10:55:51.016830 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.016836 | orchestrator | 2025-09-20 10:55:51.016841 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-20 10:55:51.016847 | orchestrator | Saturday 20 September 2025 10:48:55 +0000 (0:00:00.615) 0:03:38.864 **** 2025-09-20 10:55:51.016852 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.016857 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.016863 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.016868 | orchestrator | 2025-09-20 10:55:51.016874 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-20 10:55:51.016879 | orchestrator | 2025-09-20 10:55:51.016885 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.016894 | orchestrator | Saturday 20 September 2025 10:48:56 +0000 (0:00:00.528) 0:03:39.393 **** 2025-09-20 10:55:51.016899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.016905 | orchestrator | 2025-09-20 10:55:51.016910 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.016916 | orchestrator | Saturday 20 September 2025 10:48:56 +0000 (0:00:00.679) 0:03:40.072 **** 2025-09-20 10:55:51.016921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.016926 | orchestrator | 2025-09-20 10:55:51.016932 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.016937 | orchestrator | Saturday 20 September 2025 10:48:57 +0000 (0:00:00.629) 0:03:40.702 **** 2025-09-20 10:55:51.016943 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.016948 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.016953 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.016959 | orchestrator | 2025-09-20 10:55:51.016964 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.016970 | orchestrator | Saturday 20 September 2025 10:48:58 +0000 (0:00:00.984) 0:03:41.687 **** 2025-09-20 10:55:51.016975 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.016981 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.016986 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.016992 | orchestrator | 2025-09-20 10:55:51.016997 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.017003 | orchestrator | Saturday 20 September 2025 10:48:59 +0000 (0:00:00.781) 0:03:42.468 **** 2025-09-20 10:55:51.017008 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017014 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017019 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017024 | orchestrator | 
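The "Check for a … container" tasks from check_running_containers.yml only probe whether a given daemon container already exists on each host; the results later feed the handler_*_status facts that gate the restart handlers. Per daemon type this amounts to roughly the following, assuming podman as the container engine on this testbed (the exact name filter is ceph-ansible's convention, not verified here):

```bash
# A non-empty result means a matching container is already running on this host.
podman ps -q --filter "name=ceph-mon-$(hostname -s)"

# The same probe is repeated for the other daemon types checked above:
podman ps -q --filter "name=ceph-mgr-$(hostname -s)"
podman ps -q --filter "name=ceph-crash-$(hostname -s)"
```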
2025-09-20 10:55:51.017030 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.017035 | orchestrator | Saturday 20 September 2025 10:48:59 +0000 (0:00:00.459) 0:03:42.927 **** 2025-09-20 10:55:51.017041 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017046 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017052 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017057 | orchestrator | 2025-09-20 10:55:51.017062 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.017068 | orchestrator | Saturday 20 September 2025 10:48:59 +0000 (0:00:00.338) 0:03:43.265 **** 2025-09-20 10:55:51.017073 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017079 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017084 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017090 | orchestrator | 2025-09-20 10:55:51.017095 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.017113 | orchestrator | Saturday 20 September 2025 10:49:00 +0000 (0:00:00.704) 0:03:43.969 **** 2025-09-20 10:55:51.017119 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017125 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017130 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017136 | orchestrator | 2025-09-20 10:55:51.017141 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.017147 | orchestrator | Saturday 20 September 2025 10:49:00 +0000 (0:00:00.228) 0:03:44.198 **** 2025-09-20 10:55:51.017152 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017158 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017163 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017169 | orchestrator | 2025-09-20 10:55:51.017183 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.017189 | orchestrator | Saturday 20 September 2025 10:49:01 +0000 (0:00:00.419) 0:03:44.617 **** 2025-09-20 10:55:51.017199 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017204 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017210 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017215 | orchestrator | 2025-09-20 10:55:51.017221 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.017226 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:00.699) 0:03:45.316 **** 2025-09-20 10:55:51.017232 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017237 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017243 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017248 | orchestrator | 2025-09-20 10:55:51.017254 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 10:55:51.017259 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:00.734) 0:03:46.051 **** 2025-09-20 10:55:51.017265 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017270 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017276 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017281 | orchestrator | 2025-09-20 10:55:51.017287 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] 
****************************** 2025-09-20 10:55:51.017292 | orchestrator | Saturday 20 September 2025 10:49:02 +0000 (0:00:00.261) 0:03:46.312 **** 2025-09-20 10:55:51.017298 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017303 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017308 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017314 | orchestrator | 2025-09-20 10:55:51.017319 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 10:55:51.017325 | orchestrator | Saturday 20 September 2025 10:49:03 +0000 (0:00:00.538) 0:03:46.851 **** 2025-09-20 10:55:51.017330 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017336 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017341 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017347 | orchestrator | 2025-09-20 10:55:51.017352 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 10:55:51.017358 | orchestrator | Saturday 20 September 2025 10:49:03 +0000 (0:00:00.273) 0:03:47.124 **** 2025-09-20 10:55:51.017363 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017369 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017374 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017379 | orchestrator | 2025-09-20 10:55:51.017385 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.017390 | orchestrator | Saturday 20 September 2025 10:49:04 +0000 (0:00:00.329) 0:03:47.454 **** 2025-09-20 10:55:51.017396 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017401 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017407 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017412 | orchestrator | 2025-09-20 10:55:51.017417 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.017423 | orchestrator | Saturday 20 September 2025 10:49:04 +0000 (0:00:00.277) 0:03:47.732 **** 2025-09-20 10:55:51.017428 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017434 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017439 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017445 | orchestrator | 2025-09-20 10:55:51.017450 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.017456 | orchestrator | Saturday 20 September 2025 10:49:04 +0000 (0:00:00.482) 0:03:48.214 **** 2025-09-20 10:55:51.017461 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017466 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.017472 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.017477 | orchestrator | 2025-09-20 10:55:51.017483 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 10:55:51.017488 | orchestrator | Saturday 20 September 2025 10:49:05 +0000 (0:00:00.274) 0:03:48.488 **** 2025-09-20 10:55:51.017494 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017499 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017508 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017513 | orchestrator | 2025-09-20 10:55:51.017519 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.017524 | orchestrator | Saturday 20 September 
2025 10:49:05 +0000 (0:00:00.394) 0:03:48.883 **** 2025-09-20 10:55:51.017530 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017535 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017541 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017546 | orchestrator | 2025-09-20 10:55:51.017552 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.017557 | orchestrator | Saturday 20 September 2025 10:49:05 +0000 (0:00:00.317) 0:03:49.200 **** 2025-09-20 10:55:51.017562 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017568 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017573 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017579 | orchestrator | 2025-09-20 10:55:51.017584 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-20 10:55:51.017590 | orchestrator | Saturday 20 September 2025 10:49:06 +0000 (0:00:00.836) 0:03:50.037 **** 2025-09-20 10:55:51.017595 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017600 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017606 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017611 | orchestrator | 2025-09-20 10:55:51.017617 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-20 10:55:51.017622 | orchestrator | Saturday 20 September 2025 10:49:07 +0000 (0:00:00.327) 0:03:50.364 **** 2025-09-20 10:55:51.017628 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-20 10:55:51.017633 | orchestrator | 2025-09-20 10:55:51.017638 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-20 10:55:51.017644 | orchestrator | Saturday 20 September 2025 10:49:07 +0000 (0:00:00.703) 0:03:51.067 **** 2025-09-20 10:55:51.017649 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.017655 | orchestrator | 2025-09-20 10:55:51.017660 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-20 10:55:51.017669 | orchestrator | Saturday 20 September 2025 10:49:07 +0000 (0:00:00.112) 0:03:51.180 **** 2025-09-20 10:55:51.017677 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-20 10:55:51.017683 | orchestrator | 2025-09-20 10:55:51.017688 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-20 10:55:51.017694 | orchestrator | Saturday 20 September 2025 10:49:08 +0000 (0:00:00.952) 0:03:52.133 **** 2025-09-20 10:55:51.017699 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017705 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017710 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017716 | orchestrator | 2025-09-20 10:55:51.017721 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-20 10:55:51.017727 | orchestrator | Saturday 20 September 2025 10:49:09 +0000 (0:00:00.359) 0:03:52.492 **** 2025-09-20 10:55:51.017732 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017738 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017743 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017748 | orchestrator | 2025-09-20 10:55:51.017754 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-20 10:55:51.017759 | orchestrator | 
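The "Generate monitor initial keyring" step above produces the shared mon. secret on the deploy host, which the following tasks write out as a keyring on every monitor. A manual equivalent, following the standard Ceph bootstrap procedure (the path is illustrative):

```bash
# Create the monitor keyring with a freshly generated mon. key and full mon capabilities.
ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
  --gen-key -n mon. --cap mon 'allow *'
```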
Saturday 20 September 2025 10:49:09 +0000 (0:00:00.309) 0:03:52.802 **** 2025-09-20 10:55:51.017765 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.017770 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.017776 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.017781 | orchestrator | 2025-09-20 10:55:51.017787 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-20 10:55:51.017792 | orchestrator | Saturday 20 September 2025 10:49:10 +0000 (0:00:01.210) 0:03:54.012 **** 2025-09-20 10:55:51.017798 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.017803 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.017813 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.017819 | orchestrator | 2025-09-20 10:55:51.017824 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-20 10:55:51.017830 | orchestrator | Saturday 20 September 2025 10:49:11 +0000 (0:00:01.088) 0:03:55.101 **** 2025-09-20 10:55:51.017835 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.017840 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.017846 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.017851 | orchestrator | 2025-09-20 10:55:51.017857 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-20 10:55:51.017862 | orchestrator | Saturday 20 September 2025 10:49:12 +0000 (0:00:00.681) 0:03:55.783 **** 2025-09-20 10:55:51.017868 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017873 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.017879 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.017884 | orchestrator | 2025-09-20 10:55:51.017890 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-20 10:55:51.017895 | orchestrator | Saturday 20 September 2025 10:49:13 +0000 (0:00:00.671) 0:03:56.454 **** 2025-09-20 10:55:51.017900 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.017906 | orchestrator | 2025-09-20 10:55:51.017911 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-20 10:55:51.017917 | orchestrator | Saturday 20 September 2025 10:49:14 +0000 (0:00:01.248) 0:03:57.702 **** 2025-09-20 10:55:51.017923 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.017928 | orchestrator | 2025-09-20 10:55:51.017934 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-20 10:55:51.017939 | orchestrator | Saturday 20 September 2025 10:49:15 +0000 (0:00:00.622) 0:03:58.325 **** 2025-09-20 10:55:51.017945 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 10:55:51.017950 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.017956 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.017961 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:55:51.017967 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-20 10:55:51.017972 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:55:51.017978 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:55:51.017983 | orchestrator | changed: 
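"Create admin keyring" plus the slurp/copy tasks distribute a single client.admin key to all three monitors. By hand, creating that keyring looks like the upstream manual-deployment guide, with the same default admin capabilities:

```bash
# Create the client.admin keyring with full capabilities on every daemon type.
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
  --gen-key -n client.admin \
  --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
```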
[testbed-node-0 -> {{ item }}] 2025-09-20 10:55:51.017989 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:55:51.017994 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-20 10:55:51.018000 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-20 10:55:51.018005 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-20 10:55:51.018010 | orchestrator | 2025-09-20 10:55:51.018123 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-20 10:55:51.018133 | orchestrator | Saturday 20 September 2025 10:49:18 +0000 (0:00:03.106) 0:04:01.431 **** 2025-09-20 10:55:51.018142 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.018151 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018159 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018167 | orchestrator | 2025-09-20 10:55:51.018174 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-20 10:55:51.018182 | orchestrator | Saturday 20 September 2025 10:49:19 +0000 (0:00:01.395) 0:04:02.827 **** 2025-09-20 10:55:51.018191 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.018204 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.018213 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.018222 | orchestrator | 2025-09-20 10:55:51.018230 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-20 10:55:51.018238 | orchestrator | Saturday 20 September 2025 10:49:19 +0000 (0:00:00.308) 0:04:03.135 **** 2025-09-20 10:55:51.018255 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.018264 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.018272 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.018280 | orchestrator | 2025-09-20 10:55:51.018289 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-20 10:55:51.018297 | orchestrator | Saturday 20 September 2025 10:49:20 +0000 (0:00:00.338) 0:04:03.474 **** 2025-09-20 10:55:51.018305 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.018313 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018323 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018331 | orchestrator | 2025-09-20 10:55:51.018382 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-20 10:55:51.018394 | orchestrator | Saturday 20 September 2025 10:49:21 +0000 (0:00:01.779) 0:04:05.253 **** 2025-09-20 10:55:51.018402 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.018411 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018419 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018427 | orchestrator | 2025-09-20 10:55:51.018435 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-20 10:55:51.018444 | orchestrator | Saturday 20 September 2025 10:49:23 +0000 (0:00:01.554) 0:04:06.807 **** 2025-09-20 10:55:51.018453 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.018460 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.018468 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.018476 | orchestrator | 2025-09-20 10:55:51.018485 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-20 10:55:51.018494 | orchestrator | Saturday 
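"Import admin keyring into mon keyring", "Generate initial monmap" and "Ceph monitor mkfs with keyring" correspond to the classic monitor bootstrap sequence. In this containerized deployment the calls run inside the ceph image (hence the "container command" facts), but the underlying commands look roughly like this; the monitor addresses come from the loop output above, while the fsid and file paths are illustrative placeholders:

```bash
# Fold the admin key into the mon keyring so the monitors know about client.admin.
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

# Build the initial monitor map for the three mons on the 192.168.16.0/20 network.
monmaptool --create \
  --add testbed-node-0 192.168.16.10 \
  --add testbed-node-1 192.168.16.11 \
  --add testbed-node-2 192.168.16.12 \
  --fsid "$(uuidgen)" /tmp/monmap   # the play reuses the cluster fsid rather than a fresh one

# Initialise this node's monitor store from the monmap and keyring.
ceph-mon --mkfs -i "$(hostname -s)" --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```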
20 September 2025 10:49:23 +0000 (0:00:00.315) 0:04:07.123 **** 2025-09-20 10:55:51.018503 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-20 10:55:51.018512 | orchestrator | 2025-09-20 10:55:51.018521 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-20 10:55:51.018529 | orchestrator | Saturday 20 September 2025 10:49:24 +0000 (0:00:00.562) 0:04:07.685 **** 2025-09-20 10:55:51.018538 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.018546 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.018556 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.018561 | orchestrator | 2025-09-20 10:55:51.018567 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-20 10:55:51.018573 | orchestrator | Saturday 20 September 2025 10:49:24 +0000 (0:00:00.462) 0:04:08.148 **** 2025-09-20 10:55:51.018578 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.018584 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.018589 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.018594 | orchestrator | 2025-09-20 10:55:51.018600 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-20 10:55:51.018605 | orchestrator | Saturday 20 September 2025 10:49:25 +0000 (0:00:00.295) 0:04:08.443 **** 2025-09-20 10:55:51.018611 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.018617 | orchestrator | 2025-09-20 10:55:51.018622 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-20 10:55:51.018627 | orchestrator | Saturday 20 September 2025 10:49:25 +0000 (0:00:00.517) 0:04:08.961 **** 2025-09-20 10:55:51.018633 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.018638 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018643 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018649 | orchestrator | 2025-09-20 10:55:51.018654 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-20 10:55:51.018660 | orchestrator | Saturday 20 September 2025 10:49:27 +0000 (0:00:02.351) 0:04:11.313 **** 2025-09-20 10:55:51.018665 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.018671 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018676 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018688 | orchestrator | 2025-09-20 10:55:51.018694 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-20 10:55:51.018699 | orchestrator | Saturday 20 September 2025 10:49:29 +0000 (0:00:01.265) 0:04:12.578 **** 2025-09-20 10:55:51.018705 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.018710 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018716 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018721 | orchestrator | 2025-09-20 10:55:51.018726 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-20 10:55:51.018732 | orchestrator | Saturday 20 September 2025 10:49:30 +0000 (0:00:01.528) 0:04:14.106 **** 2025-09-20 10:55:51.018737 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.018743 | orchestrator | changed: 
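Each monitor is wrapped in a templated systemd unit that runs the mon container, grouped under ceph-mon.target. Once the unit and target files have been generated, enabling and starting them is plain systemd work, for example:

```bash
# Pick up the newly templated unit files, then enable the target and this node's monitor.
systemctl daemon-reload
systemctl enable ceph-mon.target
systemctl enable --now "ceph-mon@$(hostname -s)"
```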
[testbed-node-0] 2025-09-20 10:55:51.018748 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.018753 | orchestrator | 2025-09-20 10:55:51.018759 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-20 10:55:51.018764 | orchestrator | Saturday 20 September 2025 10:49:32 +0000 (0:00:01.878) 0:04:15.985 **** 2025-09-20 10:55:51.018770 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.018775 | orchestrator | 2025-09-20 10:55:51.018781 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-20 10:55:51.018786 | orchestrator | Saturday 20 September 2025 10:49:33 +0000 (0:00:00.629) 0:04:16.614 **** 2025-09-20 10:55:51.018792 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-20 10:55:51.018797 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.018803 | orchestrator | 2025-09-20 10:55:51.018809 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-20 10:55:51.018814 | orchestrator | Saturday 20 September 2025 10:49:55 +0000 (0:00:21.786) 0:04:38.401 **** 2025-09-20 10:55:51.018819 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.018825 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.018830 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.018836 | orchestrator | 2025-09-20 10:55:51.018841 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-20 10:55:51.018847 | orchestrator | Saturday 20 September 2025 10:50:03 +0000 (0:00:08.635) 0:04:47.036 **** 2025-09-20 10:55:51.018852 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.018858 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.018863 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.018868 | orchestrator | 2025-09-20 10:55:51.018874 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-20 10:55:51.018880 | orchestrator | Saturday 20 September 2025 10:50:04 +0000 (0:00:00.309) 0:04:47.346 **** 2025-09-20 10:55:51.018917 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-20 10:55:51.018926 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-20 10:55:51.018932 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-20 
10:55:51.018944 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-20 10:55:51.018950 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-20 10:55:51.018956 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__151abc1ec431de6d7a305ea667fae06dc6ad3c64'}])  2025-09-20 10:55:51.018962 | orchestrator | 2025-09-20 10:55:51.018968 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 10:55:51.018974 | orchestrator | Saturday 20 September 2025 10:50:16 +0000 (0:00:12.795) 0:05:00.141 **** 2025-09-20 10:55:51.018979 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.018985 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.018990 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.018996 | orchestrator | 2025-09-20 10:55:51.019001 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-20 10:55:51.019007 | orchestrator | Saturday 20 September 2025 10:50:17 +0000 (0:00:00.314) 0:05:00.456 **** 2025-09-20 10:55:51.019012 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.019018 | orchestrator | 2025-09-20 10:55:51.019023 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-20 10:55:51.019029 | orchestrator | Saturday 20 September 2025 10:50:17 +0000 (0:00:00.589) 0:05:01.045 **** 2025-09-20 10:55:51.019034 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019040 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019045 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019050 | orchestrator | 2025-09-20 10:55:51.019056 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-20 10:55:51.019061 | orchestrator | Saturday 20 September 2025 10:50:18 +0000 (0:00:00.290) 0:05:01.336 **** 2025-09-20 10:55:51.019067 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019072 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019078 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019083 | orchestrator | 2025-09-20 10:55:51.019089 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-20 10:55:51.019094 | orchestrator | Saturday 20 September 2025 10:50:18 +0000 (0:00:00.316) 0:05:01.652 **** 2025-09-20 10:55:51.019143 | orchestrator | skipping: 
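The 21-second "Waiting for the monitor(s) to form the quorum…" retry gates everything that follows; once quorum exists, "Set cluster configs" pushes the options listed in the loop items into the cluster configuration database (the osd_crush_chooseleaf_type item is omitted/skipped). A by-hand equivalent of those two steps, with values taken from the loop output above:

```bash
# Confirm the three mons have formed a quorum before touching cluster configuration.
ceph quorum_status -f json-pretty

# Persist the same options the play applied.
ceph config set global public_network 192.168.16.0/20
ceph config set global cluster_network 192.168.16.0/20
ceph config set global osd_pool_default_crush_rule -1
ceph config set global ms_bind_ipv6 false
ceph config set global ms_bind_ipv4 true
```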
[testbed-node-0] => (item=testbed-node-0)  2025-09-20 10:55:51.019149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 10:55:51.019155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 10:55:51.019161 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019166 | orchestrator | 2025-09-20 10:55:51.019172 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-20 10:55:51.019177 | orchestrator | Saturday 20 September 2025 10:50:19 +0000 (0:00:00.673) 0:05:02.326 **** 2025-09-20 10:55:51.019183 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019188 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019193 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019204 | orchestrator | 2025-09-20 10:55:51.019228 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-20 10:55:51.019234 | orchestrator | 2025-09-20 10:55:51.019243 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.019249 | orchestrator | Saturday 20 September 2025 10:50:19 +0000 (0:00:00.636) 0:05:02.963 **** 2025-09-20 10:55:51.019255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.019261 | orchestrator | 2025-09-20 10:55:51.019266 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.019271 | orchestrator | Saturday 20 September 2025 10:50:20 +0000 (0:00:00.460) 0:05:03.423 **** 2025-09-20 10:55:51.019276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-20 10:55:51.019281 | orchestrator | 2025-09-20 10:55:51.019285 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.019290 | orchestrator | Saturday 20 September 2025 10:50:20 +0000 (0:00:00.672) 0:05:04.095 **** 2025-09-20 10:55:51.019295 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019300 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019305 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019310 | orchestrator | 2025-09-20 10:55:51.019314 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.019319 | orchestrator | Saturday 20 September 2025 10:50:21 +0000 (0:00:00.690) 0:05:04.786 **** 2025-09-20 10:55:51.019324 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019329 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019334 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019338 | orchestrator | 2025-09-20 10:55:51.019343 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.019348 | orchestrator | Saturday 20 September 2025 10:50:21 +0000 (0:00:00.328) 0:05:05.114 **** 2025-09-20 10:55:51.019353 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019358 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019363 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019367 | orchestrator | 2025-09-20 10:55:51.019372 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.019377 | orchestrator | 
Saturday 20 September 2025 10:50:22 +0000 (0:00:00.316) 0:05:05.430 **** 2025-09-20 10:55:51.019382 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019387 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019392 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019396 | orchestrator | 2025-09-20 10:55:51.019401 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.019406 | orchestrator | Saturday 20 September 2025 10:50:22 +0000 (0:00:00.269) 0:05:05.699 **** 2025-09-20 10:55:51.019411 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019416 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019420 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019425 | orchestrator | 2025-09-20 10:55:51.019430 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.019435 | orchestrator | Saturday 20 September 2025 10:50:23 +0000 (0:00:00.898) 0:05:06.598 **** 2025-09-20 10:55:51.019440 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019445 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019450 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019454 | orchestrator | 2025-09-20 10:55:51.019459 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.019464 | orchestrator | Saturday 20 September 2025 10:50:23 +0000 (0:00:00.267) 0:05:06.865 **** 2025-09-20 10:55:51.019469 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019474 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019478 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019488 | orchestrator | 2025-09-20 10:55:51.019493 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.019498 | orchestrator | Saturday 20 September 2025 10:50:23 +0000 (0:00:00.295) 0:05:07.160 **** 2025-09-20 10:55:51.019503 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019507 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019512 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019517 | orchestrator | 2025-09-20 10:55:51.019522 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.019527 | orchestrator | Saturday 20 September 2025 10:50:24 +0000 (0:00:00.681) 0:05:07.842 **** 2025-09-20 10:55:51.019531 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019536 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019541 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019546 | orchestrator | 2025-09-20 10:55:51.019550 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 10:55:51.019555 | orchestrator | Saturday 20 September 2025 10:50:25 +0000 (0:00:00.838) 0:05:08.681 **** 2025-09-20 10:55:51.019560 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019565 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019570 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019574 | orchestrator | 2025-09-20 10:55:51.019579 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 10:55:51.019584 | orchestrator | Saturday 20 September 2025 10:50:25 +0000 (0:00:00.286) 0:05:08.967 **** 2025-09-20 10:55:51.019589 | orchestrator 
| ok: [testbed-node-0] 2025-09-20 10:55:51.019593 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019598 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019603 | orchestrator | 2025-09-20 10:55:51.019608 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 10:55:51.019613 | orchestrator | Saturday 20 September 2025 10:50:25 +0000 (0:00:00.300) 0:05:09.268 **** 2025-09-20 10:55:51.019618 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019622 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019627 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019632 | orchestrator | 2025-09-20 10:55:51.019637 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 10:55:51.019642 | orchestrator | Saturday 20 September 2025 10:50:26 +0000 (0:00:00.252) 0:05:09.521 **** 2025-09-20 10:55:51.019646 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019651 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019672 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019678 | orchestrator | 2025-09-20 10:55:51.019683 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.019688 | orchestrator | Saturday 20 September 2025 10:50:26 +0000 (0:00:00.415) 0:05:09.936 **** 2025-09-20 10:55:51.019693 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019698 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019703 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019707 | orchestrator | 2025-09-20 10:55:51.019712 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.019717 | orchestrator | Saturday 20 September 2025 10:50:26 +0000 (0:00:00.303) 0:05:10.240 **** 2025-09-20 10:55:51.019722 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019727 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019732 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019737 | orchestrator | 2025-09-20 10:55:51.019741 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.019746 | orchestrator | Saturday 20 September 2025 10:50:27 +0000 (0:00:00.292) 0:05:10.532 **** 2025-09-20 10:55:51.019751 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019756 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019761 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019765 | orchestrator | 2025-09-20 10:55:51.019770 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 10:55:51.019778 | orchestrator | Saturday 20 September 2025 10:50:27 +0000 (0:00:00.286) 0:05:10.819 **** 2025-09-20 10:55:51.019783 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019788 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019793 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019798 | orchestrator | 2025-09-20 10:55:51.019803 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.019807 | orchestrator | Saturday 20 September 2025 10:50:27 +0000 (0:00:00.464) 0:05:11.283 **** 2025-09-20 10:55:51.019812 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019817 | orchestrator | ok: [testbed-node-1] 
2025-09-20 10:55:51.019822 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019827 | orchestrator | 2025-09-20 10:55:51.019831 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.019836 | orchestrator | Saturday 20 September 2025 10:50:28 +0000 (0:00:00.330) 0:05:11.614 **** 2025-09-20 10:55:51.019841 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.019846 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.019851 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.019855 | orchestrator | 2025-09-20 10:55:51.019860 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-20 10:55:51.019865 | orchestrator | Saturday 20 September 2025 10:50:28 +0000 (0:00:00.510) 0:05:12.124 **** 2025-09-20 10:55:51.019870 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 10:55:51.019875 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:55:51.019880 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:55:51.019885 | orchestrator | 2025-09-20 10:55:51.019890 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-20 10:55:51.019894 | orchestrator | Saturday 20 September 2025 10:50:29 +0000 (0:00:00.740) 0:05:12.865 **** 2025-09-20 10:55:51.019899 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.019904 | orchestrator | 2025-09-20 10:55:51.019909 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-20 10:55:51.019914 | orchestrator | Saturday 20 September 2025 10:50:30 +0000 (0:00:00.619) 0:05:13.484 **** 2025-09-20 10:55:51.019919 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.019923 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.019928 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.019933 | orchestrator | 2025-09-20 10:55:51.019938 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-20 10:55:51.019943 | orchestrator | Saturday 20 September 2025 10:50:30 +0000 (0:00:00.654) 0:05:14.139 **** 2025-09-20 10:55:51.019947 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.019952 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.019957 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.019962 | orchestrator | 2025-09-20 10:55:51.019967 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-20 10:55:51.019972 | orchestrator | Saturday 20 September 2025 10:50:31 +0000 (0:00:00.327) 0:05:14.466 **** 2025-09-20 10:55:51.019976 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 10:55:51.019981 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 10:55:51.019986 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 10:55:51.019991 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-20 10:55:51.019996 | orchestrator | 2025-09-20 10:55:51.020001 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-20 10:55:51.020006 | orchestrator | Saturday 20 September 2025 10:50:40 +0000 (0:00:09.690) 0:05:24.157 **** 2025-09-20 10:55:51.020010 
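"Set_fact container_exec_cmd" records how ceph commands are run in this containerized cluster: by exec'ing into a running mon container. "Create ceph mgr keyring(s) on a mon node" then creates one keyring per manager through that command. A sketch for a single mgr, assuming podman and ceph-ansible's usual ceph-mon-<hostname> container naming, with the standard mgr capabilities from the Ceph documentation:

```bash
# Run the ceph CLI inside the monitor container on the first mon ...
exec_cmd="podman exec ceph-mon-testbed-node-0"

# ... and create a keyring for one of the managers.
${exec_cmd} ceph auth get-or-create mgr.testbed-node-0 \
  mon 'allow profile mgr' osd 'allow *' mds 'allow *'
```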
| orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.020015 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.020020 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.020031 | orchestrator | 2025-09-20 10:55:51.020036 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-20 10:55:51.020041 | orchestrator | Saturday 20 September 2025 10:50:41 +0000 (0:00:00.574) 0:05:24.731 **** 2025-09-20 10:55:51.020045 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-20 10:55:51.020050 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-20 10:55:51.020055 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-20 10:55:51.020060 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-20 10:55:51.020065 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.020070 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.020075 | orchestrator | 2025-09-20 10:55:51.020095 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-20 10:55:51.020112 | orchestrator | Saturday 20 September 2025 10:50:43 +0000 (0:00:02.135) 0:05:26.867 **** 2025-09-20 10:55:51.020117 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-20 10:55:51.020122 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-20 10:55:51.020127 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-20 10:55:51.020132 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 10:55:51.020137 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-20 10:55:51.020142 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-20 10:55:51.020146 | orchestrator | 2025-09-20 10:55:51.020151 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-20 10:55:51.020156 | orchestrator | Saturday 20 September 2025 10:50:44 +0000 (0:00:01.174) 0:05:28.041 **** 2025-09-20 10:55:51.020161 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.020166 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.020171 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.020175 | orchestrator | 2025-09-20 10:55:51.020180 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-20 10:55:51.020185 | orchestrator | Saturday 20 September 2025 10:50:45 +0000 (0:00:00.714) 0:05:28.756 **** 2025-09-20 10:55:51.020190 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.020195 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.020200 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.020204 | orchestrator | 2025-09-20 10:55:51.020209 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-20 10:55:51.020214 | orchestrator | Saturday 20 September 2025 10:50:46 +0000 (0:00:00.567) 0:05:29.323 **** 2025-09-20 10:55:51.020219 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.020224 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.020228 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.020233 | orchestrator | 2025-09-20 10:55:51.020238 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-20 10:55:51.020243 | orchestrator | Saturday 20 September 2025 10:50:46 
+0000 (0:00:00.307) 0:05:29.631 **** 2025-09-20 10:55:51.020248 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.020253 | orchestrator | 2025-09-20 10:55:51.020257 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-20 10:55:51.020262 | orchestrator | Saturday 20 September 2025 10:50:46 +0000 (0:00:00.549) 0:05:30.180 **** 2025-09-20 10:55:51.020267 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.020272 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.020276 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.020281 | orchestrator | 2025-09-20 10:55:51.020286 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-20 10:55:51.020291 | orchestrator | Saturday 20 September 2025 10:50:47 +0000 (0:00:00.334) 0:05:30.515 **** 2025-09-20 10:55:51.020296 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.020300 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.020309 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.020314 | orchestrator | 2025-09-20 10:55:51.020319 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-20 10:55:51.020324 | orchestrator | Saturday 20 September 2025 10:50:47 +0000 (0:00:00.570) 0:05:31.086 **** 2025-09-20 10:55:51.020329 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.020334 | orchestrator | 2025-09-20 10:55:51.020338 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-20 10:55:51.020343 | orchestrator | Saturday 20 September 2025 10:50:48 +0000 (0:00:00.514) 0:05:31.600 **** 2025-09-20 10:55:51.020348 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.020353 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.020358 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.020362 | orchestrator | 2025-09-20 10:55:51.020367 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-20 10:55:51.020372 | orchestrator | Saturday 20 September 2025 10:50:49 +0000 (0:00:01.271) 0:05:32.872 **** 2025-09-20 10:55:51.020377 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.020381 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.020386 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.020391 | orchestrator | 2025-09-20 10:55:51.020396 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-20 10:55:51.020401 | orchestrator | Saturday 20 September 2025 10:50:50 +0000 (0:00:01.418) 0:05:34.291 **** 2025-09-20 10:55:51.020406 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.020410 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.020415 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.020420 | orchestrator | 2025-09-20 10:55:51.020425 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-20 10:55:51.020430 | orchestrator | Saturday 20 September 2025 10:50:52 +0000 (0:00:01.652) 0:05:35.944 **** 2025-09-20 10:55:51.020434 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.020439 | orchestrator | changed: [testbed-node-1] 2025-09-20 
10:55:51.020444 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.020449 | orchestrator | 2025-09-20 10:55:51.020453 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-20 10:55:51.020458 | orchestrator | Saturday 20 September 2025 10:50:54 +0000 (0:00:01.924) 0:05:37.869 **** 2025-09-20 10:55:51.020463 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.020468 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.020473 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-20 10:55:51.020478 | orchestrator | 2025-09-20 10:55:51.020482 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-20 10:55:51.020487 | orchestrator | Saturday 20 September 2025 10:50:54 +0000 (0:00:00.400) 0:05:38.269 **** 2025-09-20 10:55:51.020492 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-20 10:55:51.020514 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-20 10:55:51.020520 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-20 10:55:51.020525 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-20 10:55:51.020530 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.020534 | orchestrator | 2025-09-20 10:55:51.020539 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-20 10:55:51.020544 | orchestrator | Saturday 20 September 2025 10:51:19 +0000 (0:00:24.515) 0:06:02.785 **** 2025-09-20 10:55:51.020549 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.020554 | orchestrator | 2025-09-20 10:55:51.020559 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-20 10:55:51.020567 | orchestrator | Saturday 20 September 2025 10:51:20 +0000 (0:00:01.243) 0:06:04.029 **** 2025-09-20 10:55:51.020572 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.020576 | orchestrator | 2025-09-20 10:55:51.020581 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-20 10:55:51.020586 | orchestrator | Saturday 20 September 2025 10:51:21 +0000 (0:00:00.318) 0:06:04.348 **** 2025-09-20 10:55:51.020591 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.020596 | orchestrator | 2025-09-20 10:55:51.020601 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-20 10:55:51.020605 | orchestrator | Saturday 20 September 2025 10:51:21 +0000 (0:00:00.152) 0:06:04.501 **** 2025-09-20 10:55:51.020610 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-20 10:55:51.020615 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-20 10:55:51.020620 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-20 10:55:51.020625 | orchestrator | 2025-09-20 10:55:51.020630 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-20 10:55:51.020634 | orchestrator | Saturday 20 September 2025 
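"Wait for all mgr to be up" polls the cluster until the freshly started managers have registered (hence the four retries above), after which the enabled-module list is read and reconciled. Roughly, executed against a mon container as shown earlier and assuming `jq` is available; the real task presumably also checks the standby count, which is not reproduced here:

```bash
# Poll until the mgr map reports an active manager.
until ceph mgr dump -f json | jq -e '.available == true' >/dev/null; do
  sleep 5
done

# Inspect which modules are currently enabled before reconciling the list.
ceph mgr module ls
```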
10:51:27 +0000 (0:00:06.266) 0:06:10.767 **** 2025-09-20 10:55:51.020639 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-20 10:55:51.020644 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-20 10:55:51.020649 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-20 10:55:51.020654 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-20 10:55:51.020658 | orchestrator | 2025-09-20 10:55:51.020663 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 10:55:51.020668 | orchestrator | Saturday 20 September 2025 10:51:31 +0000 (0:00:04.523) 0:06:15.291 **** 2025-09-20 10:55:51.020673 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.020678 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.020683 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.020688 | orchestrator | 2025-09-20 10:55:51.020692 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-20 10:55:51.020697 | orchestrator | Saturday 20 September 2025 10:51:32 +0000 (0:00:00.797) 0:06:16.088 **** 2025-09-20 10:55:51.020702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.020707 | orchestrator | 2025-09-20 10:55:51.020712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-20 10:55:51.020717 | orchestrator | Saturday 20 September 2025 10:51:33 +0000 (0:00:00.534) 0:06:16.623 **** 2025-09-20 10:55:51.020722 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.020726 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.020731 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.020736 | orchestrator | 2025-09-20 10:55:51.020741 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-20 10:55:51.020746 | orchestrator | Saturday 20 September 2025 10:51:33 +0000 (0:00:00.281) 0:06:16.904 **** 2025-09-20 10:55:51.020751 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.020755 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.020760 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.020765 | orchestrator | 2025-09-20 10:55:51.020770 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-20 10:55:51.020775 | orchestrator | Saturday 20 September 2025 10:51:34 +0000 (0:00:01.250) 0:06:18.155 **** 2025-09-20 10:55:51.020779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 10:55:51.020784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 10:55:51.020789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 10:55:51.020794 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.020802 | orchestrator | 2025-09-20 10:55:51.020807 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-20 10:55:51.020812 | orchestrator | Saturday 20 September 2025 10:51:35 +0000 (0:00:00.544) 0:06:18.699 **** 2025-09-20 10:55:51.020816 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.020821 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.020826 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.020831 | 
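Module reconciliation disables iostat, nfs and restful and enables dashboard and prometheus; balancer and status are skipped, most likely because they are always-on modules in this Ceph release. The same result by hand:

```bash
# Turn off the modules the play removed ...
ceph mgr module disable iostat
ceph mgr module disable nfs
ceph mgr module disable restful

# ... and enable the ones the testbed wants for its dashboard and monitoring stack.
ceph mgr module enable dashboard
ceph mgr module enable prometheus
```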
orchestrator | 2025-09-20 10:55:51.020836 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-20 10:55:51.020841 | orchestrator | 2025-09-20 10:55:51.020845 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.020850 | orchestrator | Saturday 20 September 2025 10:51:35 +0000 (0:00:00.532) 0:06:19.232 **** 2025-09-20 10:55:51.020855 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.020860 | orchestrator | 2025-09-20 10:55:51.020865 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.020886 | orchestrator | Saturday 20 September 2025 10:51:36 +0000 (0:00:00.649) 0:06:19.881 **** 2025-09-20 10:55:51.020892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.020897 | orchestrator | 2025-09-20 10:55:51.020902 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.020907 | orchestrator | Saturday 20 September 2025 10:51:37 +0000 (0:00:00.481) 0:06:20.362 **** 2025-09-20 10:55:51.020912 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.020917 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.020922 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.020927 | orchestrator | 2025-09-20 10:55:51.020931 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.020936 | orchestrator | Saturday 20 September 2025 10:51:37 +0000 (0:00:00.269) 0:06:20.632 **** 2025-09-20 10:55:51.020941 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.020946 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.020951 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.020955 | orchestrator | 2025-09-20 10:55:51.020960 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.020965 | orchestrator | Saturday 20 September 2025 10:51:38 +0000 (0:00:00.797) 0:06:21.430 **** 2025-09-20 10:55:51.020970 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.020975 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.020979 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.020984 | orchestrator | 2025-09-20 10:55:51.020989 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.020994 | orchestrator | Saturday 20 September 2025 10:51:38 +0000 (0:00:00.656) 0:06:22.086 **** 2025-09-20 10:55:51.020998 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021003 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021008 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021013 | orchestrator | 2025-09-20 10:55:51.021018 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.021022 | orchestrator | Saturday 20 September 2025 10:51:39 +0000 (0:00:00.600) 0:06:22.687 **** 2025-09-20 10:55:51.021027 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021032 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021037 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021041 | orchestrator | 2025-09-20 
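
Each play opens with the same battery of `Check for a ... container` probes; they only record whether a matching container is running on the node so that later logic knows what is present. One way to write such a probe, assuming podman as the container runtime and a simple name filter (neither detail is shown in the log):

---
# Illustrative sketch only: probe for a running OSD container without failing.
- name: Probe for Ceph containers
  hosts: osds                                   # group name is an assumption
  gather_facts: false
  tasks:
    - name: Check for an osd container
      ansible.builtin.command: podman ps -q --filter name=ceph-osd
      register: osd_container_check
      changed_when: false                       # read-only probe
      failed_when: false                        # an absent container is a valid result
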
10:55:51.021046 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.021051 | orchestrator | Saturday 20 September 2025 10:51:39 +0000 (0:00:00.276) 0:06:22.963 **** 2025-09-20 10:55:51.021056 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021061 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021065 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021074 | orchestrator | 2025-09-20 10:55:51.021079 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.021084 | orchestrator | Saturday 20 September 2025 10:51:40 +0000 (0:00:00.515) 0:06:23.478 **** 2025-09-20 10:55:51.021088 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021093 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021098 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021114 | orchestrator | 2025-09-20 10:55:51.021119 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.021123 | orchestrator | Saturday 20 September 2025 10:51:40 +0000 (0:00:00.273) 0:06:23.752 **** 2025-09-20 10:55:51.021128 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021133 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021138 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021143 | orchestrator | 2025-09-20 10:55:51.021147 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.021152 | orchestrator | Saturday 20 September 2025 10:51:41 +0000 (0:00:00.672) 0:06:24.425 **** 2025-09-20 10:55:51.021157 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021162 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021167 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021171 | orchestrator | 2025-09-20 10:55:51.021176 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 10:55:51.021181 | orchestrator | Saturday 20 September 2025 10:51:41 +0000 (0:00:00.662) 0:06:25.087 **** 2025-09-20 10:55:51.021186 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021191 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021196 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021200 | orchestrator | 2025-09-20 10:55:51.021205 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 10:55:51.021210 | orchestrator | Saturday 20 September 2025 10:51:42 +0000 (0:00:00.587) 0:06:25.674 **** 2025-09-20 10:55:51.021215 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021220 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021225 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021229 | orchestrator | 2025-09-20 10:55:51.021234 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 10:55:51.021239 | orchestrator | Saturday 20 September 2025 10:51:42 +0000 (0:00:00.299) 0:06:25.974 **** 2025-09-20 10:55:51.021244 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021249 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021254 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021258 | orchestrator | 2025-09-20 10:55:51.021263 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 
2025-09-20 10:55:51.021268 | orchestrator | Saturday 20 September 2025 10:51:42 +0000 (0:00:00.320) 0:06:26.294 **** 2025-09-20 10:55:51.021273 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021278 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021282 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021287 | orchestrator | 2025-09-20 10:55:51.021292 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.021297 | orchestrator | Saturday 20 September 2025 10:51:43 +0000 (0:00:00.351) 0:06:26.646 **** 2025-09-20 10:55:51.021302 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021306 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021311 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021316 | orchestrator | 2025-09-20 10:55:51.021321 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.021326 | orchestrator | Saturday 20 September 2025 10:51:43 +0000 (0:00:00.649) 0:06:27.296 **** 2025-09-20 10:55:51.021333 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021338 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021343 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021348 | orchestrator | 2025-09-20 10:55:51.021352 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.021361 | orchestrator | Saturday 20 September 2025 10:51:44 +0000 (0:00:00.313) 0:06:27.609 **** 2025-09-20 10:55:51.021366 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021370 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021375 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021380 | orchestrator | 2025-09-20 10:55:51.021385 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 10:55:51.021390 | orchestrator | Saturday 20 September 2025 10:51:44 +0000 (0:00:00.300) 0:06:27.910 **** 2025-09-20 10:55:51.021395 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021399 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021404 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021409 | orchestrator | 2025-09-20 10:55:51.021414 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.021438 | orchestrator | Saturday 20 September 2025 10:51:44 +0000 (0:00:00.316) 0:06:28.227 **** 2025-09-20 10:55:51.021443 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021448 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021452 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021457 | orchestrator | 2025-09-20 10:55:51.021462 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.021467 | orchestrator | Saturday 20 September 2025 10:51:45 +0000 (0:00:00.663) 0:06:28.890 **** 2025-09-20 10:55:51.021472 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021477 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021481 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021486 | orchestrator | 2025-09-20 10:55:51.021491 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-20 10:55:51.021496 | orchestrator | Saturday 20 September 2025 10:51:46 +0000 (0:00:00.558) 0:06:29.449 **** 2025-09-20 
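
The `Set_fact handler_*_status` tasks that follow turn those probes into booleans which the restart handlers consult later in the play. Roughly, and assuming a registered probe like the sketch above (the real expressions are not visible in the log):

---
# Illustrative sketch only: condense a container probe into a handler fact.
- name: Derive handler status facts
  hosts: osds                                   # group name is an assumption
  gather_facts: false
  tasks:
    - name: Set_fact handler_osd_status
      ansible.builtin.set_fact:
        # true only when the probe returned at least one container ID
        handler_osd_status: "{{ (osd_container_check.stdout | default('')) | length > 0 }}"
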
10:55:51.021500 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021505 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021510 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021515 | orchestrator | 2025-09-20 10:55:51.021519 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-20 10:55:51.021524 | orchestrator | Saturday 20 September 2025 10:51:46 +0000 (0:00:00.327) 0:06:29.777 **** 2025-09-20 10:55:51.021529 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:55:51.021534 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:55:51.021539 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:55:51.021543 | orchestrator | 2025-09-20 10:55:51.021548 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-20 10:55:51.021553 | orchestrator | Saturday 20 September 2025 10:51:47 +0000 (0:00:00.920) 0:06:30.697 **** 2025-09-20 10:55:51.021558 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.021563 | orchestrator | 2025-09-20 10:55:51.021567 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-20 10:55:51.021572 | orchestrator | Saturday 20 September 2025 10:51:48 +0000 (0:00:00.751) 0:06:31.449 **** 2025-09-20 10:55:51.021577 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021582 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021587 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021592 | orchestrator | 2025-09-20 10:55:51.021596 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-20 10:55:51.021601 | orchestrator | Saturday 20 September 2025 10:51:48 +0000 (0:00:00.328) 0:06:31.777 **** 2025-09-20 10:55:51.021606 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021611 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021615 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021620 | orchestrator | 2025-09-20 10:55:51.021625 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-20 10:55:51.021633 | orchestrator | Saturday 20 September 2025 10:51:48 +0000 (0:00:00.327) 0:06:32.105 **** 2025-09-20 10:55:51.021638 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021643 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021648 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021653 | orchestrator | 2025-09-20 10:55:51.021658 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-20 10:55:51.021662 | orchestrator | Saturday 20 September 2025 10:51:49 +0000 (0:00:00.873) 0:06:32.979 **** 2025-09-20 10:55:51.021667 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.021672 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.021677 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.021681 | orchestrator | 2025-09-20 10:55:51.021686 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-20 10:55:51.021691 | orchestrator | Saturday 20 September 2025 10:51:50 +0000 (0:00:00.349) 0:06:33.328 **** 2025-09-20 10:55:51.021696 | 
orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-20 10:55:51.021701 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-20 10:55:51.021706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-20 10:55:51.021710 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-20 10:55:51.021715 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-20 10:55:51.021720 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-20 10:55:51.021725 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-20 10:55:51.021736 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-20 10:55:51.021742 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-20 10:55:51.021746 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-20 10:55:51.021751 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-20 10:55:51.021756 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-20 10:55:51.021761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-20 10:55:51.021766 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-20 10:55:51.021770 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-20 10:55:51.021775 | orchestrator | 2025-09-20 10:55:51.021780 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-20 10:55:51.021785 | orchestrator | Saturday 20 September 2025 10:51:52 +0000 (0:00:02.018) 0:06:35.346 **** 2025-09-20 10:55:51.021790 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.021795 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.021800 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.021805 | orchestrator | 2025-09-20 10:55:51.021809 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-20 10:55:51.021814 | orchestrator | Saturday 20 September 2025 10:51:52 +0000 (0:00:00.303) 0:06:35.649 **** 2025-09-20 10:55:51.021819 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.021824 | orchestrator | 2025-09-20 10:55:51.021829 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-20 10:55:51.021834 | orchestrator | Saturday 20 September 2025 10:51:53 +0000 (0:00:00.761) 0:06:36.411 **** 2025-09-20 10:55:51.021838 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-20 10:55:51.021843 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-20 10:55:51.021851 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-20 10:55:51.021856 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-20 10:55:51.021861 | orchestrator | 
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-20 10:55:51.021866 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-20 10:55:51.021871 | orchestrator | 2025-09-20 10:55:51.021876 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-20 10:55:51.021880 | orchestrator | Saturday 20 September 2025 10:51:53 +0000 (0:00:00.840) 0:06:37.252 **** 2025-09-20 10:55:51.021885 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.021890 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 10:55:51.021895 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:55:51.021900 | orchestrator | 2025-09-20 10:55:51.021904 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-20 10:55:51.021909 | orchestrator | Saturday 20 September 2025 10:51:55 +0000 (0:00:01.933) 0:06:39.186 **** 2025-09-20 10:55:51.021914 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:55:51.021919 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 10:55:51.021924 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.021928 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:55:51.021933 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-20 10:55:51.021938 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.021943 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:55:51.021948 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-20 10:55:51.021952 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.021957 | orchestrator | 2025-09-20 10:55:51.021962 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-20 10:55:51.021967 | orchestrator | Saturday 20 September 2025 10:51:57 +0000 (0:00:01.188) 0:06:40.375 **** 2025-09-20 10:55:51.021972 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.021977 | orchestrator | 2025-09-20 10:55:51.021981 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-20 10:55:51.021986 | orchestrator | Saturday 20 September 2025 10:51:58 +0000 (0:00:01.867) 0:06:42.243 **** 2025-09-20 10:55:51.021991 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.021996 | orchestrator | 2025-09-20 10:55:51.022001 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-20 10:55:51.022006 | orchestrator | Saturday 20 September 2025 10:51:59 +0000 (0:00:00.533) 0:06:42.777 **** 2025-09-20 10:55:51.022011 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6a9e85d2-bd62-5d0b-9b06-ebe373b508be', 'data_vg': 'ceph-6a9e85d2-bd62-5d0b-9b06-ebe373b508be'}) 2025-09-20 10:55:51.022048 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43c75cb2-27fe-5978-b049-f1a35c211e19', 'data_vg': 'ceph-43c75cb2-27fe-5978-b049-f1a35c211e19'}) 2025-09-20 10:55:51.022053 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8bfbaad6-401f-511d-91f2-acbf67028504', 'data_vg': 'ceph-8bfbaad6-401f-511d-91f2-acbf67028504'}) 2025-09-20 10:55:51.022058 | orchestrator | changed: [testbed-node-4] => (item={'data': 
'osd-block-d7feb156-b84d-561e-a62b-66fdb35e8084', 'data_vg': 'ceph-d7feb156-b84d-561e-a62b-66fdb35e8084'}) 2025-09-20 10:55:51.022069 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44b8c0b1-de10-587f-a252-374190a68e04', 'data_vg': 'ceph-44b8c0b1-de10-587f-a252-374190a68e04'}) 2025-09-20 10:55:51.022074 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f41c3a47-393d-5abf-86b9-e0c2e1b7064d', 'data_vg': 'ceph-f41c3a47-393d-5abf-86b9-e0c2e1b7064d'}) 2025-09-20 10:55:51.022079 | orchestrator | 2025-09-20 10:55:51.022084 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-20 10:55:51.022095 | orchestrator | Saturday 20 September 2025 10:52:39 +0000 (0:00:40.075) 0:07:22.853 **** 2025-09-20 10:55:51.022112 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022117 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022122 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022127 | orchestrator | 2025-09-20 10:55:51.022132 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-20 10:55:51.022136 | orchestrator | Saturday 20 September 2025 10:52:40 +0000 (0:00:00.516) 0:07:23.369 **** 2025-09-20 10:55:51.022141 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.022146 | orchestrator | 2025-09-20 10:55:51.022151 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-20 10:55:51.022156 | orchestrator | Saturday 20 September 2025 10:52:40 +0000 (0:00:00.506) 0:07:23.876 **** 2025-09-20 10:55:51.022161 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.022165 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.022170 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.022175 | orchestrator | 2025-09-20 10:55:51.022180 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-20 10:55:51.022185 | orchestrator | Saturday 20 September 2025 10:52:41 +0000 (0:00:00.613) 0:07:24.490 **** 2025-09-20 10:55:51.022190 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.022195 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.022199 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.022204 | orchestrator | 2025-09-20 10:55:51.022209 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-20 10:55:51.022214 | orchestrator | Saturday 20 September 2025 10:52:43 +0000 (0:00:02.524) 0:07:27.014 **** 2025-09-20 10:55:51.022219 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.022224 | orchestrator | 2025-09-20 10:55:51.022228 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-20 10:55:51.022233 | orchestrator | Saturday 20 September 2025 10:52:44 +0000 (0:00:00.483) 0:07:27.498 **** 2025-09-20 10:55:51.022238 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.022243 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.022248 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.022253 | orchestrator | 2025-09-20 10:55:51.022258 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-20 10:55:51.022263 | orchestrator | Saturday 20 
September 2025 10:52:45 +0000 (0:00:01.040) 0:07:28.538 **** 2025-09-20 10:55:51.022268 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.022272 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.022277 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.022282 | orchestrator | 2025-09-20 10:55:51.022287 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-20 10:55:51.022292 | orchestrator | Saturday 20 September 2025 10:52:46 +0000 (0:00:01.322) 0:07:29.861 **** 2025-09-20 10:55:51.022297 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.022302 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.022307 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.022311 | orchestrator | 2025-09-20 10:55:51.022316 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-20 10:55:51.022321 | orchestrator | Saturday 20 September 2025 10:52:48 +0000 (0:00:01.586) 0:07:31.447 **** 2025-09-20 10:55:51.022326 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022331 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022336 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022340 | orchestrator | 2025-09-20 10:55:51.022345 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-20 10:55:51.022350 | orchestrator | Saturday 20 September 2025 10:52:48 +0000 (0:00:00.328) 0:07:31.776 **** 2025-09-20 10:55:51.022361 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022365 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022370 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022375 | orchestrator | 2025-09-20 10:55:51.022380 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-20 10:55:51.022385 | orchestrator | Saturday 20 September 2025 10:52:48 +0000 (0:00:00.350) 0:07:32.127 **** 2025-09-20 10:55:51.022390 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-20 10:55:51.022395 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-20 10:55:51.022399 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-09-20 10:55:51.022404 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-09-20 10:55:51.022409 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-09-20 10:55:51.022414 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-09-20 10:55:51.022418 | orchestrator | 2025-09-20 10:55:51.022423 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-20 10:55:51.022428 | orchestrator | Saturday 20 September 2025 10:52:50 +0000 (0:00:01.228) 0:07:33.355 **** 2025-09-20 10:55:51.022433 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-20 10:55:51.022438 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-20 10:55:51.022443 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-20 10:55:51.022447 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-09-20 10:55:51.022452 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-20 10:55:51.022457 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-20 10:55:51.022462 | orchestrator | 2025-09-20 10:55:51.022467 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-20 10:55:51.022471 | orchestrator | Saturday 20 September 2025 10:52:52 +0000 (0:00:02.132) 
0:07:35.487 **** 2025-09-20 10:55:51.022482 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-20 10:55:51.022487 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-20 10:55:51.022492 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-20 10:55:51.022496 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-09-20 10:55:51.022501 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-20 10:55:51.022506 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-20 10:55:51.022511 | orchestrator | 2025-09-20 10:55:51.022516 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-20 10:55:51.022521 | orchestrator | Saturday 20 September 2025 10:52:55 +0000 (0:00:03.229) 0:07:38.717 **** 2025-09-20 10:55:51.022526 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022531 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022535 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.022540 | orchestrator | 2025-09-20 10:55:51.022545 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-20 10:55:51.022550 | orchestrator | Saturday 20 September 2025 10:52:58 +0000 (0:00:02.734) 0:07:41.452 **** 2025-09-20 10:55:51.022555 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022559 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022564 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-20 10:55:51.022569 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.022574 | orchestrator | 2025-09-20 10:55:51.022579 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-20 10:55:51.022584 | orchestrator | Saturday 20 September 2025 10:53:10 +0000 (0:00:12.790) 0:07:54.243 **** 2025-09-20 10:55:51.022589 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022593 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022598 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022603 | orchestrator | 2025-09-20 10:55:51.022608 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 10:55:51.022613 | orchestrator | Saturday 20 September 2025 10:53:11 +0000 (0:00:00.776) 0:07:55.019 **** 2025-09-20 10:55:51.022621 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022626 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022630 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022635 | orchestrator | 2025-09-20 10:55:51.022640 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-20 10:55:51.022645 | orchestrator | Saturday 20 September 2025 10:53:12 +0000 (0:00:00.494) 0:07:55.514 **** 2025-09-20 10:55:51.022650 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.022655 | orchestrator | 2025-09-20 10:55:51.022659 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-20 10:55:51.022664 | orchestrator | Saturday 20 September 2025 10:53:12 +0000 (0:00:00.498) 0:07:56.013 **** 2025-09-20 10:55:51.022669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 
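
Condensed, the OSD bring-up above is: set the `noup` flag so newly created OSDs are not marked up mid-provisioning, create one OSD per prepared logical volume with `ceph-volume lvm create`, clear the flag, and poll until every OSD reports up. A compressed sketch of that sequence, with the group name, the shape of the `lvm_volumes` variable, and the JSON polling expression as assumptions:

---
# Illustrative sketch only: provision OSDs from pre-created LVM volumes.
- name: Provision and start OSDs
  hosts: osds                                     # group name is an assumption
  gather_facts: false
  vars:
    lvm_volumes: []                               # list of {data: <lv>, data_vg: <vg>} entries
  tasks:
    - name: Set noup flag
      ansible.builtin.command: ceph osd set noup
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: true

    - name: Use ceph-volume to create osds
      ansible.builtin.command: >
        ceph-volume lvm create --bluestore
        --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"
      changed_when: true

    - name: Unset noup flag
      ansible.builtin.command: ceph osd unset noup
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: true

    - name: Wait for all osd to be up
      ansible.builtin.command: ceph osd stat -f json
      register: osd_stat
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: false
      retries: 60                                 # the log shows 60 retries for this wait
      delay: 10
      until: >-
        (osd_stat.stdout | from_json).num_osds > 0 and
        (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
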
10:55:51.022674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.022679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.022684 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022689 | orchestrator | 2025-09-20 10:55:51.022693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-20 10:55:51.022698 | orchestrator | Saturday 20 September 2025 10:53:13 +0000 (0:00:00.397) 0:07:56.410 **** 2025-09-20 10:55:51.022703 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022708 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022713 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022718 | orchestrator | 2025-09-20 10:55:51.022722 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-20 10:55:51.022727 | orchestrator | Saturday 20 September 2025 10:53:13 +0000 (0:00:00.255) 0:07:56.665 **** 2025-09-20 10:55:51.022732 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022737 | orchestrator | 2025-09-20 10:55:51.022742 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-20 10:55:51.022747 | orchestrator | Saturday 20 September 2025 10:53:13 +0000 (0:00:00.179) 0:07:56.845 **** 2025-09-20 10:55:51.022751 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022756 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022761 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022766 | orchestrator | 2025-09-20 10:55:51.022771 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-20 10:55:51.022775 | orchestrator | Saturday 20 September 2025 10:53:13 +0000 (0:00:00.442) 0:07:57.287 **** 2025-09-20 10:55:51.022780 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022785 | orchestrator | 2025-09-20 10:55:51.022790 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-20 10:55:51.022795 | orchestrator | Saturday 20 September 2025 10:53:14 +0000 (0:00:00.194) 0:07:57.482 **** 2025-09-20 10:55:51.022800 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022804 | orchestrator | 2025-09-20 10:55:51.022809 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-20 10:55:51.022814 | orchestrator | Saturday 20 September 2025 10:53:14 +0000 (0:00:00.180) 0:07:57.662 **** 2025-09-20 10:55:51.022819 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022824 | orchestrator | 2025-09-20 10:55:51.022829 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-20 10:55:51.022833 | orchestrator | Saturday 20 September 2025 10:53:14 +0000 (0:00:00.106) 0:07:57.768 **** 2025-09-20 10:55:51.022838 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022843 | orchestrator | 2025-09-20 10:55:51.022848 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-20 10:55:51.022853 | orchestrator | Saturday 20 September 2025 10:53:14 +0000 (0:00:00.206) 0:07:57.975 **** 2025-09-20 10:55:51.022858 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022862 | orchestrator | 2025-09-20 10:55:51.022867 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] 
******************* 2025-09-20 10:55:51.022881 | orchestrator | Saturday 20 September 2025 10:53:14 +0000 (0:00:00.186) 0:07:58.162 **** 2025-09-20 10:55:51.022886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.022891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.022896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.022900 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022905 | orchestrator | 2025-09-20 10:55:51.022910 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-20 10:55:51.022915 | orchestrator | Saturday 20 September 2025 10:53:15 +0000 (0:00:00.338) 0:07:58.501 **** 2025-09-20 10:55:51.022920 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022925 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.022929 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.022934 | orchestrator | 2025-09-20 10:55:51.022939 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-20 10:55:51.022944 | orchestrator | Saturday 20 September 2025 10:53:15 +0000 (0:00:00.316) 0:07:58.817 **** 2025-09-20 10:55:51.022949 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022954 | orchestrator | 2025-09-20 10:55:51.022959 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-20 10:55:51.022963 | orchestrator | Saturday 20 September 2025 10:53:16 +0000 (0:00:00.789) 0:07:59.607 **** 2025-09-20 10:55:51.022968 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.022973 | orchestrator | 2025-09-20 10:55:51.022978 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-20 10:55:51.022983 | orchestrator | 2025-09-20 10:55:51.022987 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.022992 | orchestrator | Saturday 20 September 2025 10:53:16 +0000 (0:00:00.695) 0:08:00.302 **** 2025-09-20 10:55:51.022997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.023003 | orchestrator | 2025-09-20 10:55:51.023008 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.023013 | orchestrator | Saturday 20 September 2025 10:53:18 +0000 (0:00:01.238) 0:08:01.541 **** 2025-09-20 10:55:51.023018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.023023 | orchestrator | 2025-09-20 10:55:51.023027 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.023032 | orchestrator | Saturday 20 September 2025 10:53:19 +0000 (0:00:01.275) 0:08:02.817 **** 2025-09-20 10:55:51.023037 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023042 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023047 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023051 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023056 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023061 | orchestrator | 
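
The Osds handler block at the end of the play is skipped almost entirely here because nothing requested a restart: the `_osd_handler_called` facts set before and after the restart step appear to act as a guard so the restart logic fires at most once per play, and only when the role actually changed something. A minimal version of that guard, with the trigger variable and restart command as placeholders:

---
# Illustrative sketch only: restart-once guard as seen in the handler output.
- name: Restart guard pattern
  hosts: osds                                     # group name is an assumption
  gather_facts: false
  vars:
    trigger_restart: false                        # would be derived from what the play changed
  tasks:
    - name: Set _osd_handler_called before restart
      ansible.builtin.set_fact:
        _osd_handler_called: true
      when: trigger_restart | bool

    - name: Restart ceph osds daemon(s)
      ansible.builtin.command: systemctl restart ceph-osd.target
      when: _osd_handler_called | default(false) | bool
      changed_when: true

    - name: Set _osd_handler_called after restart
      ansible.builtin.set_fact:
        _osd_handler_called: false
      when: _osd_handler_called | default(false) | bool

With `trigger_restart` left false every step skips, which matches the run shown above.
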
ok: [testbed-node-2] 2025-09-20 10:55:51.023066 | orchestrator | 2025-09-20 10:55:51.023071 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.023076 | orchestrator | Saturday 20 September 2025 10:53:20 +0000 (0:00:01.275) 0:08:04.092 **** 2025-09-20 10:55:51.023081 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023085 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023090 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023095 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023128 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023133 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023138 | orchestrator | 2025-09-20 10:55:51.023143 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.023152 | orchestrator | Saturday 20 September 2025 10:53:21 +0000 (0:00:00.695) 0:08:04.788 **** 2025-09-20 10:55:51.023157 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023161 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023166 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023171 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023176 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023181 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023186 | orchestrator | 2025-09-20 10:55:51.023190 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.023195 | orchestrator | Saturday 20 September 2025 10:53:22 +0000 (0:00:00.976) 0:08:05.764 **** 2025-09-20 10:55:51.023200 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023205 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023210 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023215 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023219 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023225 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023229 | orchestrator | 2025-09-20 10:55:51.023234 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.023239 | orchestrator | Saturday 20 September 2025 10:53:23 +0000 (0:00:00.807) 0:08:06.572 **** 2025-09-20 10:55:51.023243 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023248 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023252 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023257 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023262 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023266 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023271 | orchestrator | 2025-09-20 10:55:51.023275 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.023280 | orchestrator | Saturday 20 September 2025 10:53:24 +0000 (0:00:01.005) 0:08:07.578 **** 2025-09-20 10:55:51.023285 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023289 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023294 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023298 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023303 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023308 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023312 | orchestrator | 
2025-09-20 10:55:51.023317 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.023327 | orchestrator | Saturday 20 September 2025 10:53:25 +0000 (0:00:00.879) 0:08:08.457 **** 2025-09-20 10:55:51.023331 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023336 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023341 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023345 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023350 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023354 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023359 | orchestrator | 2025-09-20 10:55:51.023363 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.023368 | orchestrator | Saturday 20 September 2025 10:53:25 +0000 (0:00:00.608) 0:08:09.066 **** 2025-09-20 10:55:51.023373 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023377 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023382 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023387 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023391 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023396 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023400 | orchestrator | 2025-09-20 10:55:51.023405 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.023410 | orchestrator | Saturday 20 September 2025 10:53:27 +0000 (0:00:01.332) 0:08:10.398 **** 2025-09-20 10:55:51.023414 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023419 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023423 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023431 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023436 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023440 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023445 | orchestrator | 2025-09-20 10:55:51.023449 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 10:55:51.023454 | orchestrator | Saturday 20 September 2025 10:53:28 +0000 (0:00:01.003) 0:08:11.402 **** 2025-09-20 10:55:51.023459 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023463 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023468 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023473 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023477 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023482 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023486 | orchestrator | 2025-09-20 10:55:51.023491 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 10:55:51.023496 | orchestrator | Saturday 20 September 2025 10:53:28 +0000 (0:00:00.867) 0:08:12.270 **** 2025-09-20 10:55:51.023500 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023505 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023509 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023514 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023518 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023523 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023528 | orchestrator | 2025-09-20 10:55:51.023532 | orchestrator | TASK [ceph-handler : Set_fact 
handler_osd_status] ****************************** 2025-09-20 10:55:51.023537 | orchestrator | Saturday 20 September 2025 10:53:29 +0000 (0:00:00.616) 0:08:12.886 **** 2025-09-20 10:55:51.023542 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023546 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023551 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023556 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023560 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023565 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023569 | orchestrator | 2025-09-20 10:55:51.023574 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 10:55:51.023579 | orchestrator | Saturday 20 September 2025 10:53:30 +0000 (0:00:00.669) 0:08:13.555 **** 2025-09-20 10:55:51.023583 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023588 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023593 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023597 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023602 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023606 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023611 | orchestrator | 2025-09-20 10:55:51.023615 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.023620 | orchestrator | Saturday 20 September 2025 10:53:30 +0000 (0:00:00.546) 0:08:14.102 **** 2025-09-20 10:55:51.023625 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023629 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023634 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023639 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023643 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023648 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023652 | orchestrator | 2025-09-20 10:55:51.023657 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.023662 | orchestrator | Saturday 20 September 2025 10:53:31 +0000 (0:00:00.685) 0:08:14.787 **** 2025-09-20 10:55:51.023666 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023671 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023675 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023680 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023684 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023689 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023694 | orchestrator | 2025-09-20 10:55:51.023698 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.023706 | orchestrator | Saturday 20 September 2025 10:53:31 +0000 (0:00:00.522) 0:08:15.310 **** 2025-09-20 10:55:51.023711 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023715 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023720 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023724 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:55:51.023729 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:55:51.023733 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:55:51.023738 | orchestrator | 2025-09-20 10:55:51.023743 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2025-09-20 10:55:51.023747 | orchestrator | Saturday 20 September 2025 10:53:32 +0000 (0:00:00.700) 0:08:16.011 **** 2025-09-20 10:55:51.023752 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.023756 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.023761 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.023766 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023770 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023775 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023779 | orchestrator | 2025-09-20 10:55:51.023790 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.023795 | orchestrator | Saturday 20 September 2025 10:53:33 +0000 (0:00:00.549) 0:08:16.560 **** 2025-09-20 10:55:51.023799 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023804 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023808 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023813 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023817 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023822 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023827 | orchestrator | 2025-09-20 10:55:51.023831 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.023836 | orchestrator | Saturday 20 September 2025 10:53:33 +0000 (0:00:00.665) 0:08:17.226 **** 2025-09-20 10:55:51.023841 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.023845 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.023850 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.023854 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023859 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.023863 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.023868 | orchestrator | 2025-09-20 10:55:51.023872 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-20 10:55:51.023877 | orchestrator | Saturday 20 September 2025 10:53:35 +0000 (0:00:01.095) 0:08:18.322 **** 2025-09-20 10:55:51.023881 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.023886 | orchestrator | 2025-09-20 10:55:51.023891 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-20 10:55:51.023895 | orchestrator | Saturday 20 September 2025 10:53:39 +0000 (0:00:04.047) 0:08:22.369 **** 2025-09-20 10:55:51.023900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.023904 | orchestrator | 2025-09-20 10:55:51.023909 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-20 10:55:51.023914 | orchestrator | Saturday 20 September 2025 10:53:40 +0000 (0:00:01.928) 0:08:24.298 **** 2025-09-20 10:55:51.023918 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.023923 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.023928 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.023932 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.023937 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.023941 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.023946 | orchestrator | 2025-09-20 10:55:51.023951 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 
2025-09-20 10:55:51.023955 | orchestrator | Saturday 20 September 2025 10:53:42 +0000 (0:00:01.328) 0:08:25.626 **** 2025-09-20 10:55:51.023960 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.023967 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.023972 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.023977 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.023981 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.023986 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.023990 | orchestrator | 2025-09-20 10:55:51.023995 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-20 10:55:51.024000 | orchestrator | Saturday 20 September 2025 10:53:43 +0000 (0:00:01.071) 0:08:26.697 **** 2025-09-20 10:55:51.024004 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.024010 | orchestrator | 2025-09-20 10:55:51.024015 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-20 10:55:51.024020 | orchestrator | Saturday 20 September 2025 10:53:44 +0000 (0:00:01.053) 0:08:27.751 **** 2025-09-20 10:55:51.024024 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.024029 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.024033 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.024038 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.024042 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.024047 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.024051 | orchestrator | 2025-09-20 10:55:51.024056 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-20 10:55:51.024061 | orchestrator | Saturday 20 September 2025 10:53:45 +0000 (0:00:01.379) 0:08:29.130 **** 2025-09-20 10:55:51.024065 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.024070 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.024074 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.024079 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.024084 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.024088 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.024093 | orchestrator | 2025-09-20 10:55:51.024097 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-20 10:55:51.024112 | orchestrator | Saturday 20 September 2025 10:53:49 +0000 (0:00:03.517) 0:08:32.647 **** 2025-09-20 10:55:51.024117 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:55:51.024121 | orchestrator | 2025-09-20 10:55:51.024126 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-20 10:55:51.024131 | orchestrator | Saturday 20 September 2025 10:53:50 +0000 (0:00:01.229) 0:08:33.877 **** 2025-09-20 10:55:51.024135 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024140 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024144 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024149 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.024154 | orchestrator | ok: [testbed-node-1] 2025-09-20 
10:55:51.024158 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.024163 | orchestrator | 2025-09-20 10:55:51.024167 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-20 10:55:51.024172 | orchestrator | Saturday 20 September 2025 10:53:51 +0000 (0:00:00.709) 0:08:34.587 **** 2025-09-20 10:55:51.024177 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.024181 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.024186 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.024190 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:55:51.024195 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:55:51.024199 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:55:51.024204 | orchestrator | 2025-09-20 10:55:51.024214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-20 10:55:51.024219 | orchestrator | Saturday 20 September 2025 10:53:53 +0000 (0:00:02.468) 0:08:37.056 **** 2025-09-20 10:55:51.024223 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024242 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024247 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024251 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:55:51.024256 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:55:51.024260 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:55:51.024265 | orchestrator | 2025-09-20 10:55:51.024270 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-20 10:55:51.024274 | orchestrator | 2025-09-20 10:55:51.024279 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.024284 | orchestrator | Saturday 20 September 2025 10:53:54 +0000 (0:00:00.706) 0:08:37.763 **** 2025-09-20 10:55:51.024288 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.024293 | orchestrator | 2025-09-20 10:55:51.024298 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.024302 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:00.645) 0:08:38.408 **** 2025-09-20 10:55:51.024307 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.024312 | orchestrator | 2025-09-20 10:55:51.024316 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.024321 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:00.491) 0:08:38.899 **** 2025-09-20 10:55:51.024325 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024330 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024334 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024339 | orchestrator | 2025-09-20 10:55:51.024344 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.024348 | orchestrator | Saturday 20 September 2025 10:53:56 +0000 (0:00:00.560) 0:08:39.460 **** 2025-09-20 10:55:51.024353 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024358 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024362 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024367 | orchestrator | 2025-09-20 
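
The ceph-crash role's work condenses to three things visible above: a `client.crash` key created once via the first monitor, the `/var/lib/ceph/crash/posted` directory on every node, and a small systemd unit that runs the crash agent. A sketch of those steps; the capability string follows the upstream crash module documentation, while the unit name, ownership, and group names are assumptions:

---
# Illustrative sketch only: the prerequisites for the ceph-crash agent.
- name: Prepare and start ceph-crash
  hosts: all
  gather_facts: false
  tasks:
    - name: Create client.crash keyring
      ansible.builtin.command: >
        ceph auth get-or-create client.crash
        mon 'profile crash' mgr 'profile crash'
      delegate_to: "{{ groups['mons'][0] }}"      # group name is an assumption
      run_once: true
      changed_when: true

    - name: Create /var/lib/ceph/crash/posted
      ansible.builtin.file:
        path: /var/lib/ceph/crash/posted
        state: directory
        owner: "167"                              # assumed ceph uid inside the containers
        group: "167"
        mode: "0750"

    - name: Start the ceph-crash service
      ansible.builtin.systemd:
        name: ceph-crash@{{ inventory_hostname }}  # unit name is an assumption
        state: started
        enabled: true
        daemon_reload: true                       # pick up the freshly templated unit file

The `Restart the ceph-crash service` handler seen afterwards is the same `systemd` call with `state: restarted`, triggered because the unit file changed.
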
10:55:51.024371 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.024376 | orchestrator | Saturday 20 September 2025 10:53:56 +0000 (0:00:00.730) 0:08:40.190 **** 2025-09-20 10:55:51.024380 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024385 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024390 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024394 | orchestrator | 2025-09-20 10:55:51.024399 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.024403 | orchestrator | Saturday 20 September 2025 10:53:57 +0000 (0:00:00.733) 0:08:40.923 **** 2025-09-20 10:55:51.024408 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024412 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024417 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024421 | orchestrator | 2025-09-20 10:55:51.024426 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.024431 | orchestrator | Saturday 20 September 2025 10:53:58 +0000 (0:00:00.700) 0:08:41.625 **** 2025-09-20 10:55:51.024435 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024440 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024445 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024449 | orchestrator | 2025-09-20 10:55:51.024454 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.024459 | orchestrator | Saturday 20 September 2025 10:53:58 +0000 (0:00:00.634) 0:08:42.260 **** 2025-09-20 10:55:51.024463 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024468 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024472 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024477 | orchestrator | 2025-09-20 10:55:51.024482 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.024486 | orchestrator | Saturday 20 September 2025 10:53:59 +0000 (0:00:00.317) 0:08:42.578 **** 2025-09-20 10:55:51.024494 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024498 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024503 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024508 | orchestrator | 2025-09-20 10:55:51.024512 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.024517 | orchestrator | Saturday 20 September 2025 10:53:59 +0000 (0:00:00.305) 0:08:42.883 **** 2025-09-20 10:55:51.024521 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024526 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024531 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024535 | orchestrator | 2025-09-20 10:55:51.024540 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.024544 | orchestrator | Saturday 20 September 2025 10:54:00 +0000 (0:00:00.733) 0:08:43.617 **** 2025-09-20 10:55:51.024549 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024554 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024558 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024563 | orchestrator | 2025-09-20 10:55:51.024567 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 
10:55:51.024572 | orchestrator | Saturday 20 September 2025 10:54:01 +0000 (0:00:01.079) 0:08:44.697 **** 2025-09-20 10:55:51.024576 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024581 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024586 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024590 | orchestrator | 2025-09-20 10:55:51.024595 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 10:55:51.024600 | orchestrator | Saturday 20 September 2025 10:54:01 +0000 (0:00:00.316) 0:08:45.014 **** 2025-09-20 10:55:51.024604 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024609 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024613 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024618 | orchestrator | 2025-09-20 10:55:51.024623 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 10:55:51.024632 | orchestrator | Saturday 20 September 2025 10:54:02 +0000 (0:00:00.341) 0:08:45.356 **** 2025-09-20 10:55:51.024637 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024642 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024646 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024651 | orchestrator | 2025-09-20 10:55:51.024656 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 10:55:51.024660 | orchestrator | Saturday 20 September 2025 10:54:02 +0000 (0:00:00.339) 0:08:45.695 **** 2025-09-20 10:55:51.024665 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024669 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024674 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024679 | orchestrator | 2025-09-20 10:55:51.024683 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.024688 | orchestrator | Saturday 20 September 2025 10:54:02 +0000 (0:00:00.522) 0:08:46.218 **** 2025-09-20 10:55:51.024692 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024697 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024701 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024706 | orchestrator | 2025-09-20 10:55:51.024711 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.024715 | orchestrator | Saturday 20 September 2025 10:54:03 +0000 (0:00:00.283) 0:08:46.502 **** 2025-09-20 10:55:51.024720 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024725 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024729 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024734 | orchestrator | 2025-09-20 10:55:51.024739 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.024743 | orchestrator | Saturday 20 September 2025 10:54:03 +0000 (0:00:00.285) 0:08:46.787 **** 2025-09-20 10:55:51.024748 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024757 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024762 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024766 | orchestrator | 2025-09-20 10:55:51.024771 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 10:55:51.024775 | orchestrator | Saturday 20 September 2025 10:54:03 +0000 (0:00:00.288) 0:08:47.075 **** 
2025-09-20 10:55:51.024780 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024785 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024789 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024794 | orchestrator | 2025-09-20 10:55:51.024799 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.024803 | orchestrator | Saturday 20 September 2025 10:54:04 +0000 (0:00:00.499) 0:08:47.575 **** 2025-09-20 10:55:51.024808 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024812 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024817 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024821 | orchestrator | 2025-09-20 10:55:51.024826 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.024831 | orchestrator | Saturday 20 September 2025 10:54:04 +0000 (0:00:00.400) 0:08:47.975 **** 2025-09-20 10:55:51.024835 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.024840 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.024844 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.024849 | orchestrator | 2025-09-20 10:55:51.024853 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-20 10:55:51.024858 | orchestrator | Saturday 20 September 2025 10:54:05 +0000 (0:00:00.470) 0:08:48.445 **** 2025-09-20 10:55:51.024863 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.024867 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.024872 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-20 10:55:51.024877 | orchestrator | 2025-09-20 10:55:51.024881 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-20 10:55:51.024886 | orchestrator | Saturday 20 September 2025 10:54:05 +0000 (0:00:00.581) 0:08:49.027 **** 2025-09-20 10:55:51.024890 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.024895 | orchestrator | 2025-09-20 10:55:51.024900 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-20 10:55:51.024904 | orchestrator | Saturday 20 September 2025 10:54:07 +0000 (0:00:01.997) 0:08:51.024 **** 2025-09-20 10:55:51.024909 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-20 10:55:51.024916 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.024920 | orchestrator | 2025-09-20 10:55:51.024925 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-20 10:55:51.024930 | orchestrator | Saturday 20 September 2025 10:54:07 +0000 (0:00:00.175) 0:08:51.200 **** 2025-09-20 10:55:51.024935 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:55:51.024945 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 
'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:55:51.024950 | orchestrator | 2025-09-20 10:55:51.024955 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-20 10:55:51.024960 | orchestrator | Saturday 20 September 2025 10:54:14 +0000 (0:00:06.458) 0:08:57.659 **** 2025-09-20 10:55:51.024964 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:55:51.024972 | orchestrator | 2025-09-20 10:55:51.024977 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-20 10:55:51.024986 | orchestrator | Saturday 20 September 2025 10:54:17 +0000 (0:00:03.415) 0:09:01.074 **** 2025-09-20 10:55:51.024991 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.024996 | orchestrator | 2025-09-20 10:55:51.025001 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-20 10:55:51.025005 | orchestrator | Saturday 20 September 2025 10:54:18 +0000 (0:00:00.890) 0:09:01.965 **** 2025-09-20 10:55:51.025010 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-20 10:55:51.025015 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-20 10:55:51.025019 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-20 10:55:51.025024 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-20 10:55:51.025029 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-20 10:55:51.025033 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-20 10:55:51.025038 | orchestrator | 2025-09-20 10:55:51.025042 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-20 10:55:51.025047 | orchestrator | Saturday 20 September 2025 10:54:19 +0000 (0:00:01.237) 0:09:03.203 **** 2025-09-20 10:55:51.025052 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.025056 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 10:55:51.025061 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:55:51.025065 | orchestrator | 2025-09-20 10:55:51.025070 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-20 10:55:51.025075 | orchestrator | Saturday 20 September 2025 10:54:22 +0000 (0:00:02.453) 0:09:05.656 **** 2025-09-20 10:55:51.025079 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:55:51.025084 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-20 10:55:51.025088 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025093 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:55:51.025098 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 10:55:51.025114 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025118 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:55:51.025123 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-20 10:55:51.025127 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025132 | orchestrator | 2025-09-20 
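
The "Create filesystem pools" and "Create ceph filesystem" tasks above boil down to a few Ceph commands delegated to the first monitor (testbed-node-0). A hand-run sketch is below; the pool names, pg_num=16 and size=3 come from the task output, while the filesystem name "cephfs" and the ceph-mon-<hostname> container name are assumptions:

    # Run from testbed-node-0, inside the mon container (container name is an assumption).
    docker exec ceph-mon-testbed-node-0 ceph osd pool create cephfs_data 16
    docker exec ceph-mon-testbed-node-0 ceph osd pool create cephfs_metadata 16
    docker exec ceph-mon-testbed-node-0 ceph osd pool set cephfs_data size 3
    docker exec ceph-mon-testbed-node-0 ceph osd pool set cephfs_metadata size 3
    # "ceph fs new" takes the metadata pool first, then the data pool.
    docker exec ceph-mon-testbed-node-0 ceph fs new cephfs cephfs_metadata cephfs_data
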
10:55:51.025137 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-20 10:55:51.025141 | orchestrator | Saturday 20 September 2025 10:54:23 +0000 (0:00:01.430) 0:09:07.087 **** 2025-09-20 10:55:51.025146 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025151 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025155 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025160 | orchestrator | 2025-09-20 10:55:51.025164 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-20 10:55:51.025169 | orchestrator | Saturday 20 September 2025 10:54:26 +0000 (0:00:03.060) 0:09:10.147 **** 2025-09-20 10:55:51.025174 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025178 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025183 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025187 | orchestrator | 2025-09-20 10:55:51.025192 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-20 10:55:51.025197 | orchestrator | Saturday 20 September 2025 10:54:27 +0000 (0:00:00.397) 0:09:10.545 **** 2025-09-20 10:55:51.025201 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.025210 | orchestrator | 2025-09-20 10:55:51.025215 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-20 10:55:51.025220 | orchestrator | Saturday 20 September 2025 10:54:27 +0000 (0:00:00.614) 0:09:11.159 **** 2025-09-20 10:55:51.025225 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.025229 | orchestrator | 2025-09-20 10:55:51.025234 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-20 10:55:51.025239 | orchestrator | Saturday 20 September 2025 10:54:28 +0000 (0:00:00.613) 0:09:11.772 **** 2025-09-20 10:55:51.025243 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025248 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025253 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025257 | orchestrator | 2025-09-20 10:55:51.025262 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-20 10:55:51.025266 | orchestrator | Saturday 20 September 2025 10:54:29 +0000 (0:00:01.172) 0:09:12.945 **** 2025-09-20 10:55:51.025271 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025276 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025280 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025285 | orchestrator | 2025-09-20 10:55:51.025290 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-20 10:55:51.025294 | orchestrator | Saturday 20 September 2025 10:54:30 +0000 (0:00:01.092) 0:09:14.038 **** 2025-09-20 10:55:51.025299 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025304 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025308 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025313 | orchestrator | 2025-09-20 10:55:51.025317 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-20 10:55:51.025322 | orchestrator | Saturday 20 September 2025 10:54:32 +0000 
(0:00:01.556) 0:09:15.594 **** 2025-09-20 10:55:51.025327 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025331 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025336 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025341 | orchestrator | 2025-09-20 10:55:51.025345 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-20 10:55:51.025350 | orchestrator | Saturday 20 September 2025 10:54:34 +0000 (0:00:02.083) 0:09:17.678 **** 2025-09-20 10:55:51.025360 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025365 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025369 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025374 | orchestrator | 2025-09-20 10:55:51.025379 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 10:55:51.025383 | orchestrator | Saturday 20 September 2025 10:54:35 +0000 (0:00:01.245) 0:09:18.923 **** 2025-09-20 10:55:51.025388 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025392 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025397 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025402 | orchestrator | 2025-09-20 10:55:51.025406 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-20 10:55:51.025411 | orchestrator | Saturday 20 September 2025 10:54:36 +0000 (0:00:01.013) 0:09:19.936 **** 2025-09-20 10:55:51.025416 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.025420 | orchestrator | 2025-09-20 10:55:51.025425 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-20 10:55:51.025430 | orchestrator | Saturday 20 September 2025 10:54:37 +0000 (0:00:00.548) 0:09:20.485 **** 2025-09-20 10:55:51.025434 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025439 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025443 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025448 | orchestrator | 2025-09-20 10:55:51.025453 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-20 10:55:51.025457 | orchestrator | Saturday 20 September 2025 10:54:37 +0000 (0:00:00.348) 0:09:20.834 **** 2025-09-20 10:55:51.025465 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.025470 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.025474 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.025479 | orchestrator | 2025-09-20 10:55:51.025484 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-20 10:55:51.025488 | orchestrator | Saturday 20 September 2025 10:54:39 +0000 (0:00:01.561) 0:09:22.395 **** 2025-09-20 10:55:51.025493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.025497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.025502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.025507 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025511 | orchestrator | 2025-09-20 10:55:51.025516 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-20 10:55:51.025521 | orchestrator | Saturday 20 September 2025 10:54:39 +0000 (0:00:00.710) 
0:09:23.106 **** 2025-09-20 10:55:51.025525 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025530 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025535 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025539 | orchestrator | 2025-09-20 10:55:51.025544 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-20 10:55:51.025548 | orchestrator | 2025-09-20 10:55:51.025553 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 10:55:51.025558 | orchestrator | Saturday 20 September 2025 10:54:40 +0000 (0:00:00.691) 0:09:23.797 **** 2025-09-20 10:55:51.025563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.025567 | orchestrator | 2025-09-20 10:55:51.025572 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 10:55:51.025577 | orchestrator | Saturday 20 September 2025 10:54:41 +0000 (0:00:00.761) 0:09:24.558 **** 2025-09-20 10:55:51.025581 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.025586 | orchestrator | 2025-09-20 10:55:51.025591 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 10:55:51.025595 | orchestrator | Saturday 20 September 2025 10:54:41 +0000 (0:00:00.532) 0:09:25.091 **** 2025-09-20 10:55:51.025600 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025605 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025609 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025614 | orchestrator | 2025-09-20 10:55:51.025619 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 10:55:51.025623 | orchestrator | Saturday 20 September 2025 10:54:42 +0000 (0:00:00.529) 0:09:25.620 **** 2025-09-20 10:55:51.025628 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025632 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025637 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025642 | orchestrator | 2025-09-20 10:55:51.025646 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 10:55:51.025651 | orchestrator | Saturday 20 September 2025 10:54:43 +0000 (0:00:00.738) 0:09:26.359 **** 2025-09-20 10:55:51.025656 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025660 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025665 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025669 | orchestrator | 2025-09-20 10:55:51.025674 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 10:55:51.025679 | orchestrator | Saturday 20 September 2025 10:54:43 +0000 (0:00:00.727) 0:09:27.086 **** 2025-09-20 10:55:51.025683 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025688 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025692 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025697 | orchestrator | 2025-09-20 10:55:51.025702 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 10:55:51.025709 | orchestrator | Saturday 20 September 2025 10:54:44 +0000 (0:00:00.692) 0:09:27.779 **** 2025-09-20 10:55:51.025714 | 
orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025718 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025723 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025728 | orchestrator | 2025-09-20 10:55:51.025732 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 10:55:51.025737 | orchestrator | Saturday 20 September 2025 10:54:45 +0000 (0:00:00.563) 0:09:28.342 **** 2025-09-20 10:55:51.025742 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025746 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025755 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025760 | orchestrator | 2025-09-20 10:55:51.025765 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 10:55:51.025770 | orchestrator | Saturday 20 September 2025 10:54:45 +0000 (0:00:00.339) 0:09:28.682 **** 2025-09-20 10:55:51.025774 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025779 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025783 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025788 | orchestrator | 2025-09-20 10:55:51.025793 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 10:55:51.025797 | orchestrator | Saturday 20 September 2025 10:54:45 +0000 (0:00:00.356) 0:09:29.038 **** 2025-09-20 10:55:51.025802 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025807 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025811 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025816 | orchestrator | 2025-09-20 10:55:51.025821 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 10:55:51.025826 | orchestrator | Saturday 20 September 2025 10:54:46 +0000 (0:00:00.705) 0:09:29.744 **** 2025-09-20 10:55:51.025830 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025835 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025839 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025844 | orchestrator | 2025-09-20 10:55:51.025849 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 10:55:51.025853 | orchestrator | Saturday 20 September 2025 10:54:47 +0000 (0:00:00.983) 0:09:30.728 **** 2025-09-20 10:55:51.025858 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025862 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025867 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025872 | orchestrator | 2025-09-20 10:55:51.025876 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 10:55:51.025881 | orchestrator | Saturday 20 September 2025 10:54:47 +0000 (0:00:00.307) 0:09:31.035 **** 2025-09-20 10:55:51.025886 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.025890 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.025895 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.025900 | orchestrator | 2025-09-20 10:55:51.025904 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 10:55:51.025909 | orchestrator | Saturday 20 September 2025 10:54:48 +0000 (0:00:00.307) 0:09:31.343 **** 2025-09-20 10:55:51.025913 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025918 | orchestrator | ok: 
[testbed-node-4] 2025-09-20 10:55:51.025923 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025927 | orchestrator | 2025-09-20 10:55:51.025932 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 10:55:51.025937 | orchestrator | Saturday 20 September 2025 10:54:48 +0000 (0:00:00.334) 0:09:31.678 **** 2025-09-20 10:55:51.025941 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025946 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025951 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025955 | orchestrator | 2025-09-20 10:55:51.025960 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 10:55:51.025965 | orchestrator | Saturday 20 September 2025 10:54:48 +0000 (0:00:00.597) 0:09:32.276 **** 2025-09-20 10:55:51.025972 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.025977 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.025981 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.025986 | orchestrator | 2025-09-20 10:55:51.025991 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 10:55:51.025996 | orchestrator | Saturday 20 September 2025 10:54:49 +0000 (0:00:00.349) 0:09:32.626 **** 2025-09-20 10:55:51.026001 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026005 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026010 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026030 | orchestrator | 2025-09-20 10:55:51.026035 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 10:55:51.026039 | orchestrator | Saturday 20 September 2025 10:54:49 +0000 (0:00:00.293) 0:09:32.919 **** 2025-09-20 10:55:51.026044 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026049 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026053 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026058 | orchestrator | 2025-09-20 10:55:51.026063 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 10:55:51.026067 | orchestrator | Saturday 20 September 2025 10:54:49 +0000 (0:00:00.275) 0:09:33.195 **** 2025-09-20 10:55:51.026072 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026076 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026081 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026086 | orchestrator | 2025-09-20 10:55:51.026090 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 10:55:51.026095 | orchestrator | Saturday 20 September 2025 10:54:50 +0000 (0:00:00.427) 0:09:33.622 **** 2025-09-20 10:55:51.026123 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.026129 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.026133 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.026138 | orchestrator | 2025-09-20 10:55:51.026143 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 10:55:51.026147 | orchestrator | Saturday 20 September 2025 10:54:50 +0000 (0:00:00.306) 0:09:33.929 **** 2025-09-20 10:55:51.026152 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.026157 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.026161 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.026166 | orchestrator | 
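
With the MDS containers started and their handlers processed, the play moves on to ceph-rgw. To double-check the intermediate state by hand, the usual status commands can be run from a monitor node; running them inside the mon container, and the container name below, are assumptions based on common ceph-ansible naming:

    docker exec ceph-mon-testbed-node-0 ceph -s          # overall cluster health
    docker exec ceph-mon-testbed-node-0 ceph mds stat    # MDS up/active summary
    docker exec ceph-mon-testbed-node-0 ceph fs status   # per-filesystem view
    systemctl list-units 'ceph-mds*'                     # on an MDS node: units the role generated
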
2025-09-20 10:55:51.026170 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-20 10:55:51.026175 | orchestrator | Saturday 20 September 2025 10:54:51 +0000 (0:00:00.506) 0:09:34.435 **** 2025-09-20 10:55:51.026180 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.026184 | orchestrator | 2025-09-20 10:55:51.026189 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-20 10:55:51.026194 | orchestrator | Saturday 20 September 2025 10:54:51 +0000 (0:00:00.654) 0:09:35.090 **** 2025-09-20 10:55:51.026198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026203 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 10:55:51.026213 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:55:51.026218 | orchestrator | 2025-09-20 10:55:51.026223 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-20 10:55:51.026228 | orchestrator | Saturday 20 September 2025 10:54:53 +0000 (0:00:02.000) 0:09:37.091 **** 2025-09-20 10:55:51.026232 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:55:51.026237 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 10:55:51.026241 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.026246 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:55:51.026251 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-20 10:55:51.026255 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.026264 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:55:51.026269 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-20 10:55:51.026273 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.026278 | orchestrator | 2025-09-20 10:55:51.026283 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-20 10:55:51.026287 | orchestrator | Saturday 20 September 2025 10:54:54 +0000 (0:00:01.121) 0:09:38.212 **** 2025-09-20 10:55:51.026292 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026297 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026301 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026306 | orchestrator | 2025-09-20 10:55:51.026310 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-20 10:55:51.026315 | orchestrator | Saturday 20 September 2025 10:54:55 +0000 (0:00:00.266) 0:09:38.478 **** 2025-09-20 10:55:51.026319 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.026324 | orchestrator | 2025-09-20 10:55:51.026329 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-20 10:55:51.026333 | orchestrator | Saturday 20 September 2025 10:54:55 +0000 (0:00:00.623) 0:09:39.102 **** 2025-09-20 10:55:51.026338 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.026343 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.026347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.026352 | orchestrator | 2025-09-20 10:55:51.026356 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-20 10:55:51.026361 | orchestrator | Saturday 20 September 2025 10:54:56 +0000 (0:00:00.740) 0:09:39.843 **** 2025-09-20 10:55:51.026366 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026370 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-20 10:55:51.026375 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026380 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-20 10:55:51.026384 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026389 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-20 10:55:51.026394 | orchestrator | 2025-09-20 10:55:51.026398 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-20 10:55:51.026403 | orchestrator | Saturday 20 September 2025 10:55:00 +0000 (0:00:04.272) 0:09:44.115 **** 2025-09-20 10:55:51.026407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026412 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:55:51.026417 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026421 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:55:51.026426 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:55:51.026430 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:55:51.026435 | orchestrator | 2025-09-20 10:55:51.026440 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-20 10:55:51.026444 | orchestrator | Saturday 20 September 2025 10:55:03 +0000 (0:00:02.706) 0:09:46.821 **** 2025-09-20 10:55:51.026451 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:55:51.026456 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.026461 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:55:51.026465 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.026470 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:55:51.026475 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.026479 | orchestrator | 2025-09-20 10:55:51.026484 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-20 10:55:51.026488 | orchestrator | Saturday 20 September 2025 10:55:04 +0000 (0:00:01.175) 0:09:47.997 **** 2025-09-20 10:55:51.026493 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-20 10:55:51.026498 
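
The "Create rgw keyrings" task above creates one client.rgw.<hostname>.<instance> key per gateway on the delegate monitor, and the following "Copy ceph key(s) if needed" task ships it to the RGW hosts. A hedged manual equivalent for the rgw0 instance on testbed-node-3 (the capability string ceph-ansible actually applies may differ, and the mon container name is an assumption):

    docker exec ceph-mon-testbed-node-0 ceph auth get-or-create \
        client.rgw.testbed-node-3.rgw0 mon 'allow rw' osd 'allow rwx'
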
| orchestrator | 2025-09-20 10:55:51.026502 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-20 10:55:51.026511 | orchestrator | Saturday 20 September 2025 10:55:04 +0000 (0:00:00.248) 0:09:48.245 **** 2025-09-20 10:55:51.026516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026540 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026544 | orchestrator | 2025-09-20 10:55:51.026549 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-20 10:55:51.026554 | orchestrator | Saturday 20 September 2025 10:55:05 +0000 (0:00:00.593) 0:09:48.838 **** 2025-09-20 10:55:51.026558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 10:55:51.026582 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026586 | orchestrator | 2025-09-20 10:55:51.026591 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-20 10:55:51.026595 | orchestrator | Saturday 20 September 2025 10:55:06 +0000 (0:00:00.655) 0:09:49.494 **** 2025-09-20 10:55:51.026600 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 10:55:51.026605 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 10:55:51.026609 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 10:55:51.026614 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 10:55:51.026622 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 10:55:51.026626 | orchestrator | 2025-09-20 10:55:51.026631 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-20 10:55:51.026635 | orchestrator | Saturday 20 September 2025 10:55:36 +0000 (0:00:29.871) 0:10:19.366 **** 2025-09-20 10:55:51.026639 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026643 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026647 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026651 | orchestrator | 2025-09-20 10:55:51.026656 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-20 10:55:51.026660 | orchestrator | Saturday 20 September 2025 10:55:36 +0000 (0:00:00.307) 0:10:19.674 **** 2025-09-20 10:55:51.026664 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026668 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026672 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026676 | orchestrator | 2025-09-20 10:55:51.026681 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-20 10:55:51.026685 | orchestrator | Saturday 20 September 2025 10:55:36 +0000 (0:00:00.591) 0:10:20.265 **** 2025-09-20 10:55:51.026689 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.026693 | orchestrator | 2025-09-20 10:55:51.026697 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-20 10:55:51.026701 | orchestrator | Saturday 20 September 2025 10:55:37 +0000 (0:00:00.598) 0:10:20.864 **** 2025-09-20 10:55:51.026706 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.026710 | orchestrator | 2025-09-20 10:55:51.026714 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-20 10:55:51.026718 | orchestrator | Saturday 20 September 2025 10:55:38 +0000 (0:00:00.782) 0:10:21.647 **** 2025-09-20 10:55:51.026722 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.026727 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.026731 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.026735 | orchestrator | 2025-09-20 10:55:51.026743 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-20 10:55:51.026748 | orchestrator | Saturday 20 September 2025 10:55:39 +0000 (0:00:01.309) 0:10:22.957 **** 2025-09-20 10:55:51.026752 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.026756 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.026760 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.026764 | orchestrator | 2025-09-20 10:55:51.026769 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-20 10:55:51.026773 | orchestrator | Saturday 20 September 2025 10:55:40 +0000 (0:00:01.139) 0:10:24.096 **** 2025-09-20 10:55:51.026777 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:55:51.026781 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:55:51.026785 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:55:51.026790 | orchestrator | 
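
The roughly 30-second "Create rgw pools" task earlier in this play iterated over the five default.rgw.* pools with pg_num=8 and size=3, after which the role generated and enabled the systemd units and the ceph-radosgw.target. Done by hand, the pool loop is approximately the following (pool names and numbers come from the task output; the mon container name is an assumption):

    for pool in default.rgw.buckets.data default.rgw.buckets.index \
                default.rgw.control default.rgw.log default.rgw.meta; do
        docker exec ceph-mon-testbed-node-0 ceph osd pool create "$pool" 8
        docker exec ceph-mon-testbed-node-0 ceph osd pool set "$pool" size 3
        docker exec ceph-mon-testbed-node-0 ceph osd pool application enable "$pool" rgw
    done
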
2025-09-20 10:55:51.026794 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-20 10:55:51.026798 | orchestrator | Saturday 20 September 2025 10:55:42 +0000 (0:00:01.711) 0:10:25.808 **** 2025-09-20 10:55:51.026802 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.026806 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.026811 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-20 10:55:51.026819 | orchestrator | 2025-09-20 10:55:51.026823 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 10:55:51.026828 | orchestrator | Saturday 20 September 2025 10:55:45 +0000 (0:00:02.529) 0:10:28.337 **** 2025-09-20 10:55:51.026832 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026836 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026840 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026844 | orchestrator | 2025-09-20 10:55:51.026849 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-20 10:55:51.026853 | orchestrator | Saturday 20 September 2025 10:55:45 +0000 (0:00:00.380) 0:10:28.718 **** 2025-09-20 10:55:51.026857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:55:51.026861 | orchestrator | 2025-09-20 10:55:51.026865 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-20 10:55:51.026870 | orchestrator | Saturday 20 September 2025 10:55:46 +0000 (0:00:00.863) 0:10:29.581 **** 2025-09-20 10:55:51.026874 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:55:51.026878 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:55:51.026882 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:55:51.026886 | orchestrator | 2025-09-20 10:55:51.026890 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-20 10:55:51.026895 | orchestrator | Saturday 20 September 2025 10:55:46 +0000 (0:00:00.320) 0:10:29.902 **** 2025-09-20 10:55:51.026899 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026903 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:55:51.026907 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:55:51.026911 | orchestrator | 2025-09-20 10:55:51.026915 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-20 10:55:51.026919 | orchestrator | Saturday 20 September 2025 10:55:46 +0000 (0:00:00.345) 0:10:30.247 **** 2025-09-20 10:55:51.026924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:55:51.026928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:55:51.026932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:55:51.026936 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:55:51.026940 | orchestrator | 2025-09-20 10:55:51.026945 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-20 10:55:51.026949 | orchestrator | Saturday 20 
September 2025 10:55:48 +0000 (0:00:01.174) 0:10:31.422 ****
2025-09-20 10:55:51.026953 | orchestrator | ok: [testbed-node-3]
2025-09-20 10:55:51.026957 | orchestrator | ok: [testbed-node-4]
2025-09-20 10:55:51.026961 | orchestrator | ok: [testbed-node-5]
2025-09-20 10:55:51.026965 | orchestrator |
2025-09-20 10:55:51.026970 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:55:51.026974 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-09-20 10:55:51.026978 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-20 10:55:51.026982 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-20 10:55:51.026987 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-09-20 10:55:51.026991 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-20 10:55:51.026995 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-20 10:55:51.026999 | orchestrator |
2025-09-20 10:55:51.027006 | orchestrator |
2025-09-20 10:55:51.027010 | orchestrator |
2025-09-20 10:55:51.027014 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:55:51.027019 | orchestrator | Saturday 20 September 2025 10:55:48 +0000 (0:00:00.264) 0:10:31.687 ****
2025-09-20 10:55:51.027027 | orchestrator | ===============================================================================
2025-09-20 10:55:51.027031 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 49.53s
2025-09-20 10:55:51.027036 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.08s
2025-09-20 10:55:51.027040 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.87s
2025-09-20 10:55:51.027044 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.52s
2025-09-20 10:55:51.027048 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.79s
2025-09-20 10:55:51.027052 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 12.80s
2025-09-20 10:55:51.027056 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s
2025-09-20 10:55:51.027060 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.69s
2025-09-20 10:55:51.027064 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.64s
2025-09-20 10:55:51.027069 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.46s
2025-09-20 10:55:51.027073 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.38s
2025-09-20 10:55:51.027077 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.27s
2025-09-20 10:55:51.027081 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.52s
2025-09-20 10:55:51.027085 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.27s
2025-09-20 10:55:51.027089 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.05s
2025-09-20 10:55:51.027093 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.67s
2025-09-20 10:55:51.027097 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.52s
2025-09-20 10:55:51.027239 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.42s
2025-09-20 10:55:51.027243 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.23s
2025-09-20 10:55:51.027247 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.11s
2025-09-20 10:55:51.027251 | orchestrator | 2025-09-20 10:55:50 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED
2025-09-20 10:55:51.027256 | orchestrator | 2025-09-20 10:55:51 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED
2025-09-20 10:55:51.027260 | orchestrator | 2025-09-20 10:55:51 | INFO  | Task 9d5b5482-6b65-40b3-8862-bbfee041f556 is in state STARTED
2025-09-20 10:55:51.027264 | orchestrator | 2025-09-20 10:55:51 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:55:54.067290 | orchestrator | 2025-09-20 10:55:54 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED
2025-09-20 10:55:54.072797 | orchestrator | 2025-09-20 10:55:54 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED
2025-09-20 10:55:54.076693 | orchestrator | 2025-09-20 10:55:54 | INFO  | Task 9d5b5482-6b65-40b3-8862-bbfee041f556 is in state STARTED
2025-09-20 10:55:54.076725 | orchestrator | 2025-09-20 10:55:54 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:55:57.122664 | orchestrator | 2025-09-20 10:55:57 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED
2025-09-20 10:55:57.123716 | orchestrator | 2025-09-20 10:55:57 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED
2025-09-20 10:55:57.125261 | orchestrator | 2025-09-20 10:55:57 | INFO  | Task 9d5b5482-6b65-40b3-8862-bbfee041f556 is in state STARTED
2025-09-20 10:55:57.125313 | orchestrator | 2025-09-20 10:55:57 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:56:00.180246 | orchestrator | 2025-09-20 10:56:00 | INFO  | Task
cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:00.182502 | orchestrator | 2025-09-20 10:56:00 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:00.184268 | orchestrator | 2025-09-20 10:56:00 | INFO  | Task 9d5b5482-6b65-40b3-8862-bbfee041f556 is in state STARTED 2025-09-20 10:56:00.184614 | orchestrator | 2025-09-20 10:56:00 | INFO  | Wait 1 second(s) until the next check [the same status checks for tasks cdb2b011-7b5c-4a75-8aad-08acc6155800, c9dac8d4-4512-4218-b1f7-15693de93c4b and 9d5b5482-6b65-40b3-8862-bbfee041f556 repeat every ~3 seconds until 10:56:39, all three remaining in state STARTED] 2025-09-20
10:56:39.911367 | orchestrator | 2025-09-20 10:56:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:56:42.967602 | orchestrator | 2025-09-20 10:56:42 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:42.969054 | orchestrator | 2025-09-20 10:56:42 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:42.972915 | orchestrator | 2025-09-20 10:56:42.972973 | orchestrator | 2025-09-20 10:56:42.972986 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:56:42.972999 | orchestrator | 2025-09-20 10:56:42.973010 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:56:42.973021 | orchestrator | Saturday 20 September 2025 10:53:52 +0000 (0:00:00.249) 0:00:00.249 **** 2025-09-20 10:56:42.973077 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:56:42.973098 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:56:42.973316 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:56:42.973329 | orchestrator | 2025-09-20 10:56:42.973341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:56:42.973352 | orchestrator | Saturday 20 September 2025 10:53:52 +0000 (0:00:00.267) 0:00:00.517 **** 2025-09-20 10:56:42.973363 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-20 10:56:42.973375 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-20 10:56:42.973386 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-20 10:56:42.973396 | orchestrator | 2025-09-20 10:56:42.973407 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-20 10:56:42.973419 | orchestrator | 2025-09-20 10:56:42.973439 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-20 10:56:42.973459 | orchestrator | Saturday 20 September 2025 10:53:53 +0000 (0:00:00.365) 0:00:00.882 **** 2025-09-20 10:56:42.973478 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:56:42.973497 | orchestrator | 2025-09-20 10:56:42.973516 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-20 10:56:42.973535 | orchestrator | Saturday 20 September 2025 10:53:53 +0000 (0:00:00.464) 0:00:01.346 **** 2025-09-20 10:56:42.973551 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-20 10:56:42.973562 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-20 10:56:42.973572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-20 10:56:42.973583 | orchestrator | 2025-09-20 10:56:42.973594 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-20 10:56:42.973605 | orchestrator | Saturday 20 September 2025 10:53:54 +0000 (0:00:00.598) 0:00:01.945 **** 2025-09-20 10:56:42.973624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.973663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.973715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.973731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.973746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.973764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.973785 | orchestrator | 2025-09-20 10:56:42.973796 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-20 10:56:42.973808 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:01.486) 0:00:03.432 **** 2025-09-20 10:56:42.973819 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:56:42.973830 | orchestrator | 2025-09-20 10:56:42.973841 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-20 10:56:42.973853 | orchestrator | Saturday 20 September 2025 10:53:56 +0000 (0:00:00.577) 0:00:04.009 **** 2025-09-20 10:56:42.973875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 
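The "Setting sysctl values" and "Ensuring config directories exist" tasks earlier in this play iterate over a dict of service definitions shaped like the items printed above (container name, image, volumes, healthcheck, haproxy). A minimal, illustrative Ansible sketch of that pattern follows; the variable name opensearch_services and the directory mode are assumptions for illustration, not taken from the kolla-ansible source:

- name: Setting sysctl values
  ansible.posix.sysctl:        # requires the ansible.posix collection
    name: vm.max_map_count
    value: "262144"
    state: present

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"   # e.g. /etc/kolla/opensearch, /etc/kolla/opensearch-dashboards
    state: directory
    mode: "0770"                        # assumed mode
  loop: "{{ opensearch_services | dict2items }}"   # hypothetical variable holding the dict shown in the log
  when: item.value.enabled | bool

The healthcheck and haproxy keys of each item are data consumed by later container and load-balancer configuration steps; they are not executed by these two tasks.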
2025-09-20 10:56:42.973888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.973900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.973917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.973945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.973960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.973973 | orchestrator | 2025-09-20 10:56:42.973986 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-20 10:56:42.973998 | orchestrator | Saturday 20 September 2025 10:53:58 +0000 (0:00:02.649) 0:00:06.659 **** 2025-09-20 10:56:42.974011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:56:42.974115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:56:42.974138 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:56:42.974152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:56:42.974176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:56:42.974190 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:56:42.974203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:56:42.974222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:56:42.974242 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:56:42.974254 | orchestrator | 2025-09-20 10:56:42.974266 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-20 10:56:42.974278 | orchestrator | Saturday 20 September 2025 10:54:00 +0000 (0:00:01.367) 0:00:08.027 **** 2025-09-20 10:56:42.974289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:56:42.974310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:56:42.974322 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:56:42.974334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:56:42.974358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 10:56:42.974370 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:56:42.974382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 10:56:42.974402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
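The per-item "skipping" results for the backend internal TLS certificate and key tasks above indicate that these copies are gated on a condition that evaluates to false in this testbed. A hedged sketch of such a guarded copy task, where the flag kolla_enable_tls_backend and the path variable kolla_certificates_dir are assumptions used only for illustration:

- name: Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/{{ item.key }}-cert.pem"   # hypothetical source path
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  loop: "{{ opensearch_services | dict2items }}"   # hypothetical dict of the services shown above
  when:
    - kolla_enable_tls_backend | bool   # false in this run, so every item is skipped
    - item.value.enabled | bool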
2025-09-20 10:56:42.974414 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:56:42.974433 | orchestrator | 2025-09-20 10:56:42.974452 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-20 10:56:42.974471 | orchestrator | Saturday 20 September 2025 10:54:01 +0000 (0:00:01.128) 0:00:09.155 **** 2025-09-20 10:56:42.974489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.974521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.974552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.974575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.974588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.974601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.974619 | orchestrator | 2025-09-20 10:56:42.974631 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-20 10:56:42.974652 | orchestrator | Saturday 20 September 2025 10:54:03 +0000 (0:00:02.531) 0:00:11.687 **** 2025-09-20 10:56:42.974664 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:56:42.974675 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:56:42.974686 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:56:42.974727 | orchestrator | 2025-09-20 10:56:42.974739 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-20 10:56:42.974750 | orchestrator | Saturday 20 September 2025 10:54:07 +0000 (0:00:03.096) 0:00:14.783 **** 2025-09-20 10:56:42.974761 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:56:42.974778 | 
orchestrator | changed: [testbed-node-1] 2025-09-20 10:56:42.974798 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:56:42.974817 | orchestrator | 2025-09-20 10:56:42.974834 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-20 10:56:42.974851 | orchestrator | Saturday 20 September 2025 10:54:08 +0000 (0:00:01.691) 0:00:16.474 **** 2025-09-20 10:56:42.974871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.974903 | orchestrator | 2025-09-20 10:56:42 | INFO  | Task 9d5b5482-6b65-40b3-8862-bbfee041f556 is in state SUCCESS 2025-09-20 10:56:42.974916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.974929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 10:56:42.974956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.974969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.974989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 10:56:42.975008 | orchestrator | 2025-09-20 10:56:42.975019 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-20 10:56:42.975058 | orchestrator | Saturday 20 September 2025 10:54:10 +0000 (0:00:02.016) 0:00:18.491 **** 2025-09-20 10:56:42.975077 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:56:42.975088 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:56:42.975099 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:56:42.975110 | orchestrator | 2025-09-20 10:56:42.975121 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-20 10:56:42.975132 | orchestrator | 
Saturday 20 September 2025 10:54:11 +0000 (0:00:00.310) 0:00:18.801 **** 2025-09-20 10:56:42.975142 | orchestrator | 2025-09-20 10:56:42.975153 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-20 10:56:42.975164 | orchestrator | Saturday 20 September 2025 10:54:11 +0000 (0:00:00.059) 0:00:18.860 **** 2025-09-20 10:56:42.975175 | orchestrator | 2025-09-20 10:56:42.975186 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-20 10:56:42.975196 | orchestrator | Saturday 20 September 2025 10:54:11 +0000 (0:00:00.069) 0:00:18.929 **** 2025-09-20 10:56:42.975207 | orchestrator | 2025-09-20 10:56:42.975218 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-20 10:56:42.975229 | orchestrator | Saturday 20 September 2025 10:54:11 +0000 (0:00:00.066) 0:00:18.995 **** 2025-09-20 10:56:42.975240 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:56:42.975251 | orchestrator | 2025-09-20 10:56:42.975261 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-20 10:56:42.975272 | orchestrator | Saturday 20 September 2025 10:54:11 +0000 (0:00:00.208) 0:00:19.204 **** 2025-09-20 10:56:42.975283 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:56:42.975294 | orchestrator | 2025-09-20 10:56:42.975305 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-20 10:56:42.975316 | orchestrator | Saturday 20 September 2025 10:54:12 +0000 (0:00:00.648) 0:00:19.852 **** 2025-09-20 10:56:42.975326 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:56:42.975337 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:56:42.975348 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:56:42.975359 | orchestrator | 2025-09-20 10:56:42.975370 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-20 10:56:42.975380 | orchestrator | Saturday 20 September 2025 10:55:14 +0000 (0:01:02.343) 0:01:22.196 **** 2025-09-20 10:56:42.975391 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:56:42.975402 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:56:42.975413 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:56:42.975428 | orchestrator | 2025-09-20 10:56:42.975447 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-20 10:56:42.975471 | orchestrator | Saturday 20 September 2025 10:56:32 +0000 (0:01:17.982) 0:02:40.178 **** 2025-09-20 10:56:42.975491 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:56:42.975510 | orchestrator | 2025-09-20 10:56:42.975529 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-20 10:56:42.975541 | orchestrator | Saturday 20 September 2025 10:56:32 +0000 (0:00:00.480) 0:02:40.659 **** 2025-09-20 10:56:42.975552 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:56:42.975562 | orchestrator | 2025-09-20 10:56:42.975573 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-20 10:56:42.975584 | orchestrator | Saturday 20 September 2025 10:56:35 +0000 (0:00:02.473) 0:02:43.132 **** 2025-09-20 10:56:42.975594 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:56:42.975605 | orchestrator | 2025-09-20 
10:56:42.975616 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-20 10:56:42.975626 | orchestrator | Saturday 20 September 2025 10:56:37 +0000 (0:00:02.084) 0:02:45.217 **** 2025-09-20 10:56:42.975637 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:56:42.975648 | orchestrator | 2025-09-20 10:56:42.975667 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-20 10:56:42.975678 | orchestrator | Saturday 20 September 2025 10:56:40 +0000 (0:00:02.567) 0:02:47.784 **** 2025-09-20 10:56:42.975688 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:56:42.975700 | orchestrator | 2025-09-20 10:56:42.975710 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:56:42.975722 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 10:56:42.975735 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 10:56:42.975754 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 10:56:42.975766 | orchestrator | 2025-09-20 10:56:42.975777 | orchestrator | 2025-09-20 10:56:42.975787 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:56:42.975798 | orchestrator | Saturday 20 September 2025 10:56:42 +0000 (0:00:02.392) 0:02:50.177 **** 2025-09-20 10:56:42.975809 | orchestrator | =============================================================================== 2025-09-20 10:56:42.975820 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.98s 2025-09-20 10:56:42.975831 | orchestrator | opensearch : Restart opensearch container ------------------------------ 62.34s 2025-09-20 10:56:42.975842 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.10s 2025-09-20 10:56:42.975852 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.65s 2025-09-20 10:56:42.975863 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.57s 2025-09-20 10:56:42.975874 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.53s 2025-09-20 10:56:42.975884 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.47s 2025-09-20 10:56:42.975895 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2025-09-20 10:56:42.975906 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.08s 2025-09-20 10:56:42.975917 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.02s 2025-09-20 10:56:42.975927 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.69s 2025-09-20 10:56:42.975938 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.49s 2025-09-20 10:56:42.975949 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.37s 2025-09-20 10:56:42.975960 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.13s 2025-09-20 10:56:42.975971 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.65s 
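The retention-policy steps recorded above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") talk to OpenSearch's Index State Management (ISM) plugin over HTTP. A hedged sketch of what such calls can look like with the uri module; the policy name, index pattern, target address and the retention_policy variable are illustrative assumptions, not values confirmed by this deployment:

- name: Create new log retention policy
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_plugins/_ism/policies/retention"   # PUT creates/updates an ISM policy; name assumed
    method: PUT
    body_format: json
    body: "{{ retention_policy }}"   # hypothetical dict of the form {"policy": {...}}
    status_code: [200, 201]
  run_once: true

- name: Apply retention policy to existing indices
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_plugins/_ism/add/flog-*"   # index pattern assumed
    method: POST
    body_format: json
    body:
      policy_id: retention
    status_code: [200]
  run_once: true

The preceding "Check if a log retention policy exists" step would be a GET against the same policies endpoint, which is why only testbed-node-0 reports results for these tasks.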
2025-09-20 10:56:42.975981 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.60s 2025-09-20 10:56:42.975992 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2025-09-20 10:56:42.976003 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-09-20 10:56:42.976013 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-09-20 10:56:42.976024 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2025-09-20 10:56:42.976061 | orchestrator | 2025-09-20 10:56:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:56:46.018590 | orchestrator | 2025-09-20 10:56:46 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:46.020301 | orchestrator | 2025-09-20 10:56:46 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:46.020334 | orchestrator | 2025-09-20 10:56:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:56:49.063659 | orchestrator | 2025-09-20 10:56:49 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:49.064834 | orchestrator | 2025-09-20 10:56:49 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:49.064889 | orchestrator | 2025-09-20 10:56:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:56:52.107219 | orchestrator | 2025-09-20 10:56:52 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:52.108544 | orchestrator | 2025-09-20 10:56:52 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:52.108605 | orchestrator | 2025-09-20 10:56:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:56:55.143927 | orchestrator | 2025-09-20 10:56:55 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:55.144408 | orchestrator | 2025-09-20 10:56:55 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:55.144441 | orchestrator | 2025-09-20 10:56:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:56:58.192908 | orchestrator | 2025-09-20 10:56:58 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:56:58.194520 | orchestrator | 2025-09-20 10:56:58 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state STARTED 2025-09-20 10:56:58.194567 | orchestrator | 2025-09-20 10:56:58 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:01.252883 | orchestrator | 2025-09-20 10:57:01 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:01.259399 | orchestrator | 2025-09-20 10:57:01 | INFO  | Task c9dac8d4-4512-4218-b1f7-15693de93c4b is in state SUCCESS 2025-09-20 10:57:01.260761 | orchestrator | 2025-09-20 10:57:01.260802 | orchestrator | 2025-09-20 10:57:01.260816 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-20 10:57:01.260829 | orchestrator | 2025-09-20 10:57:01.260841 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-20 10:57:01.260853 | orchestrator | Saturday 20 September 2025 10:53:52 +0000 (0:00:00.089) 0:00:00.089 **** 2025-09-20 10:57:01.260864 | orchestrator | ok: [localhost] => { 2025-09-20 10:57:01.260877 | orchestrator |  "msg": "The task 
'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-20 10:57:01.260889 | orchestrator | } 2025-09-20 10:57:01.260901 | orchestrator | 2025-09-20 10:57:01.260912 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-20 10:57:01.260924 | orchestrator | Saturday 20 September 2025 10:53:52 +0000 (0:00:00.048) 0:00:00.138 **** 2025-09-20 10:57:01.260935 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-20 10:57:01.260948 | orchestrator | ...ignoring 2025-09-20 10:57:01.260960 | orchestrator | 2025-09-20 10:57:01.261230 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-20 10:57:01.261251 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:02.785) 0:00:02.923 **** 2025-09-20 10:57:01.261263 | orchestrator | skipping: [localhost] 2025-09-20 10:57:01.261274 | orchestrator | 2025-09-20 10:57:01.261285 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-20 10:57:01.261297 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:00.046) 0:00:02.969 **** 2025-09-20 10:57:01.261307 | orchestrator | ok: [localhost] 2025-09-20 10:57:01.261319 | orchestrator | 2025-09-20 10:57:01.261330 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:57:01.261342 | orchestrator | 2025-09-20 10:57:01.261353 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:57:01.261393 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:00.160) 0:00:03.129 **** 2025-09-20 10:57:01.261405 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.261416 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.261427 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.261438 | orchestrator | 2025-09-20 10:57:01.261449 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:57:01.261460 | orchestrator | Saturday 20 September 2025 10:53:55 +0000 (0:00:00.305) 0:00:03.435 **** 2025-09-20 10:57:01.261471 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-20 10:57:01.261483 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-20 10:57:01.261495 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-20 10:57:01.261506 | orchestrator | 2025-09-20 10:57:01.261516 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-20 10:57:01.261527 | orchestrator | 2025-09-20 10:57:01.261538 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-20 10:57:01.261549 | orchestrator | Saturday 20 September 2025 10:53:56 +0000 (0:00:00.601) 0:00:04.036 **** 2025-09-20 10:57:01.261560 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 10:57:01.261572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-20 10:57:01.261582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-20 10:57:01.261594 | orchestrator | 2025-09-20 10:57:01.261604 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 10:57:01.261616 | orchestrator | Saturday 20 September 2025 
10:53:56 +0000 (0:00:00.380) 0:00:04.417 **** 2025-09-20 10:57:01.261626 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:57:01.261638 | orchestrator | 2025-09-20 10:57:01.261649 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-20 10:57:01.261660 | orchestrator | Saturday 20 September 2025 10:53:57 +0000 (0:00:00.600) 0:00:05.018 **** 2025-09-20 10:57:01.261707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.261726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.261758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.261771 | orchestrator | 2025-09-20 10:57:01.261792 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-20 10:57:01.261803 | orchestrator | Saturday 20 September 2025 10:54:00 +0000 (0:00:03.546) 0:00:08.564 **** 2025-09-20 10:57:01.261814 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.261826 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.261837 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.261848 | orchestrator | 2025-09-20 10:57:01.261860 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-20 10:57:01.261880 | orchestrator | Saturday 20 September 2025 10:54:01 +0000 (0:00:00.675) 0:00:09.240 **** 2025-09-20 10:57:01.261892 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.261905 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.261917 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.261931 | orchestrator | 2025-09-20 10:57:01.261943 | 
orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-20 10:57:01.261956 | orchestrator | Saturday 20 September 2025 10:54:03 +0000 (0:00:01.581) 0:00:10.821 **** 2025-09-20 10:57:01.261970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.261997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.262103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.262121 | orchestrator | 2025-09-20 10:57:01.262217 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-20 10:57:01.262234 | orchestrator | Saturday 20 September 2025 10:54:07 +0000 (0:00:03.864) 0:00:14.685 **** 2025-09-20 10:57:01.262246 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.262257 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.262268 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.262279 | orchestrator | 2025-09-20 10:57:01.262290 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-20 10:57:01.262301 | orchestrator | Saturday 20 September 2025 10:54:08 +0000 (0:00:01.020) 0:00:15.706 **** 2025-09-20 10:57:01.262312 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.262323 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:57:01.262334 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:57:01.262345 | orchestrator | 2025-09-20 10:57:01.262356 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 10:57:01.262368 | orchestrator | Saturday 20 September 2025 10:54:12 +0000 (0:00:04.140) 0:00:19.847 **** 2025-09-20 10:57:01.262385 | orchestrator | included: 
/ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:57:01.262396 | orchestrator | 2025-09-20 10:57:01.262407 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-20 10:57:01.262418 | orchestrator | Saturday 20 September 2025 10:54:12 +0000 (0:00:00.509) 0:00:20.357 **** 2025-09-20 10:57:01.262442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262463 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.262476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262488 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.262512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262532 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.262543 | orchestrator | 2025-09-20 10:57:01.262554 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-20 10:57:01.262565 | orchestrator | Saturday 20 September 2025 10:54:15 +0000 (0:00:03.014) 0:00:23.371 **** 2025-09-20 10:57:01.262577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262589 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.262612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262632 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.262644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262655 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.262666 | orchestrator | 2025-09-20 10:57:01.262678 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-20 10:57:01.262688 | orchestrator | Saturday 20 September 2025 10:54:18 +0000 (0:00:02.321) 0:00:25.692 **** 2025-09-20 10:57:01.262706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262732 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.262752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262765 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.262782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 10:57:01.262800 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.262811 | orchestrator | 2025-09-20 10:57:01.262822 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-20 10:57:01.262833 | orchestrator | Saturday 20 September 2025 10:54:21 +0000 (0:00:03.135) 0:00:28.827 **** 2025-09-20 10:57:01.262854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.262872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.262900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 10:57:01.262914 | orchestrator | 2025-09-20 10:57:01.262925 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-20 10:57:01.262936 | orchestrator | Saturday 20 September 2025 10:54:24 +0000 (0:00:03.311) 0:00:32.139 **** 2025-09-20 10:57:01.262947 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.262958 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:57:01.262969 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:57:01.262980 | orchestrator | 2025-09-20 10:57:01.262991 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-20 10:57:01.263002 | orchestrator | Saturday 20 September 
2025 10:54:25 +0000 (0:00:00.959) 0:00:33.098 **** 2025-09-20 10:57:01.263013 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.263047 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.263058 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.263069 | orchestrator | 2025-09-20 10:57:01.263081 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-20 10:57:01.263092 | orchestrator | Saturday 20 September 2025 10:54:26 +0000 (0:00:00.667) 0:00:33.766 **** 2025-09-20 10:57:01.263103 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.263114 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.263125 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.263136 | orchestrator | 2025-09-20 10:57:01.263147 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-20 10:57:01.263158 | orchestrator | Saturday 20 September 2025 10:54:26 +0000 (0:00:00.362) 0:00:34.129 **** 2025-09-20 10:57:01.263170 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-20 10:57:01.263189 | orchestrator | ...ignoring 2025-09-20 10:57:01.263201 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-20 10:57:01.263212 | orchestrator | ...ignoring 2025-09-20 10:57:01.263223 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-20 10:57:01.263234 | orchestrator | ...ignoring 2025-09-20 10:57:01.263245 | orchestrator | 2025-09-20 10:57:01.263257 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-20 10:57:01.263268 | orchestrator | Saturday 20 September 2025 10:54:37 +0000 (0:00:10.903) 0:00:45.032 **** 2025-09-20 10:57:01.263279 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.263290 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.263306 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.263317 | orchestrator | 2025-09-20 10:57:01.263328 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-20 10:57:01.263339 | orchestrator | Saturday 20 September 2025 10:54:37 +0000 (0:00:00.445) 0:00:45.477 **** 2025-09-20 10:57:01.263350 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.263362 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.263373 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.263384 | orchestrator | 2025-09-20 10:57:01.263395 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-20 10:57:01.263406 | orchestrator | Saturday 20 September 2025 10:54:38 +0000 (0:00:00.699) 0:00:46.177 **** 2025-09-20 10:57:01.263417 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.263428 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.263439 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.263450 | orchestrator | 2025-09-20 10:57:01.263461 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-20 10:57:01.263473 | orchestrator | Saturday 20 September 2025 10:54:38 +0000 (0:00:00.436) 0:00:46.613 **** 2025-09-20 10:57:01.263484 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.263494 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.263506 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.263516 | orchestrator | 2025-09-20 10:57:01.263527 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-20 10:57:01.263539 | orchestrator | Saturday 20 September 2025 10:54:39 +0000 (0:00:00.465) 0:00:47.079 **** 2025-09-20 10:57:01.263550 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.263561 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.263572 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.263583 | orchestrator | 2025-09-20 10:57:01.263594 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-20 10:57:01.263605 | orchestrator | Saturday 20 September 2025 10:54:39 +0000 (0:00:00.460) 0:00:47.539 **** 2025-09-20 10:57:01.263622 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.263634 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.263645 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.263656 | orchestrator | 2025-09-20 10:57:01.263667 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 10:57:01.263678 | orchestrator | Saturday 20 September 2025 10:54:40 +0000 (0:00:00.725) 0:00:48.265 **** 2025-09-20 10:57:01.263689 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.263700 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.263711 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-20 10:57:01.263722 | orchestrator | 2025-09-20 10:57:01.263733 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-20 10:57:01.263745 | orchestrator | Saturday 20 September 2025 10:54:40 +0000 (0:00:00.391) 0:00:48.656 **** 2025-09-20 10:57:01.263762 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.263773 | orchestrator | 2025-09-20 10:57:01.263784 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-20 10:57:01.263795 | orchestrator | Saturday 20 September 2025 10:54:50 +0000 (0:00:09.807) 0:00:58.464 **** 2025-09-20 10:57:01.263806 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.263817 | orchestrator | 2025-09-20 10:57:01.263828 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 10:57:01.263839 | orchestrator | Saturday 20 September 2025 10:54:50 +0000 (0:00:00.108) 0:00:58.573 **** 2025-09-20 10:57:01.263850 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.263861 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.263872 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.263883 | orchestrator | 2025-09-20 10:57:01.263894 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-20 10:57:01.263905 | orchestrator | Saturday 20 September 2025 10:54:51 +0000 (0:00:00.866) 0:00:59.439 **** 2025-09-20 10:57:01.263916 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.263927 | orchestrator | 2025-09-20 10:57:01.264135 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-20 10:57:01.264154 | orchestrator | Saturday 20 September 2025 10:54:58 +0000 (0:00:07.144) 
0:01:06.583 **** 2025-09-20 10:57:01.264166 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.264177 | orchestrator | 2025-09-20 10:57:01.264188 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-20 10:57:01.264199 | orchestrator | Saturday 20 September 2025 10:55:00 +0000 (0:00:01.559) 0:01:08.143 **** 2025-09-20 10:57:01.264210 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.264221 | orchestrator | 2025-09-20 10:57:01.264232 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-20 10:57:01.264243 | orchestrator | Saturday 20 September 2025 10:55:02 +0000 (0:00:02.282) 0:01:10.426 **** 2025-09-20 10:57:01.264254 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.264264 | orchestrator | 2025-09-20 10:57:01.264275 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-20 10:57:01.264286 | orchestrator | Saturday 20 September 2025 10:55:02 +0000 (0:00:00.134) 0:01:10.560 **** 2025-09-20 10:57:01.264296 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.264306 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.264315 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.264325 | orchestrator | 2025-09-20 10:57:01.264335 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-20 10:57:01.264344 | orchestrator | Saturday 20 September 2025 10:55:03 +0000 (0:00:00.328) 0:01:10.889 **** 2025-09-20 10:57:01.264354 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.264364 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-20 10:57:01.264374 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:57:01.264383 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:57:01.264393 | orchestrator | 2025-09-20 10:57:01.264403 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-20 10:57:01.264413 | orchestrator | skipping: no hosts matched 2025-09-20 10:57:01.264423 | orchestrator | 2025-09-20 10:57:01.264432 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-20 10:57:01.264442 | orchestrator | 2025-09-20 10:57:01.264452 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-20 10:57:01.264467 | orchestrator | Saturday 20 September 2025 10:55:03 +0000 (0:00:00.589) 0:01:11.479 **** 2025-09-20 10:57:01.264477 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:57:01.264487 | orchestrator | 2025-09-20 10:57:01.264497 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-20 10:57:01.264506 | orchestrator | Saturday 20 September 2025 10:55:27 +0000 (0:00:23.708) 0:01:35.188 **** 2025-09-20 10:57:01.264516 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.264534 | orchestrator | 2025-09-20 10:57:01.264544 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-20 10:57:01.264554 | orchestrator | Saturday 20 September 2025 10:55:43 +0000 (0:00:15.533) 0:01:50.721 **** 2025-09-20 10:57:01.264564 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.264573 | orchestrator | 2025-09-20 10:57:01.264583 | orchestrator | PLAY [Start mariadb services] ************************************************** 
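At this point the bootstrap node (testbed-node-0) is up and testbed-node-1 has already been restarted and re-joined, so the same restart-and-wait cycle now repeats for testbed-node-2. The "Wait for MariaDB service to sync WSREP" steps poll the Galera node until it reports itself as Synced before the play moves on to the next member. A minimal Ansible-style sketch of that kind of check, with hypothetical task and variable names rather than the exact kolla-ansible implementation:

    - name: Wait until the Galera node reports wsrep_local_state_comment = Synced
      # Query the local node through the mysql client and retry until the
      # node has caught up with the rest of the cluster.
      command: >
        mysql -h {{ api_interface_address }} -P 3306
        -u monitor -p{{ mariadb_monitor_password }}
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_status
      until: "'Synced' in wsrep_status.stdout"
      retries: 30
      delay: 10
      changed_when: false

Restarting the members one at a time and gating each restart on this kind of sync check keeps the cluster quorate while every node is cycled onto the new configuration.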
2025-09-20 10:57:01.264593 | orchestrator | 2025-09-20 10:57:01.264602 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-20 10:57:01.264612 | orchestrator | Saturday 20 September 2025 10:55:45 +0000 (0:00:02.202) 0:01:52.924 **** 2025-09-20 10:57:01.264622 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:57:01.264631 | orchestrator | 2025-09-20 10:57:01.264641 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-20 10:57:01.264651 | orchestrator | Saturday 20 September 2025 10:56:04 +0000 (0:00:18.775) 0:02:11.700 **** 2025-09-20 10:57:01.264660 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.264670 | orchestrator | 2025-09-20 10:57:01.264680 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-20 10:57:01.264689 | orchestrator | Saturday 20 September 2025 10:56:24 +0000 (0:00:20.595) 0:02:32.295 **** 2025-09-20 10:57:01.264699 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.264708 | orchestrator | 2025-09-20 10:57:01.264718 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-20 10:57:01.264728 | orchestrator | 2025-09-20 10:57:01.264745 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-20 10:57:01.264755 | orchestrator | Saturday 20 September 2025 10:56:27 +0000 (0:00:02.562) 0:02:34.857 **** 2025-09-20 10:57:01.264765 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.264776 | orchestrator | 2025-09-20 10:57:01.264787 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-20 10:57:01.264798 | orchestrator | Saturday 20 September 2025 10:56:43 +0000 (0:00:15.939) 0:02:50.797 **** 2025-09-20 10:57:01.264809 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.264819 | orchestrator | 2025-09-20 10:57:01.264830 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-20 10:57:01.264841 | orchestrator | Saturday 20 September 2025 10:56:43 +0000 (0:00:00.544) 0:02:51.342 **** 2025-09-20 10:57:01.264852 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.264863 | orchestrator | 2025-09-20 10:57:01.264874 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-20 10:57:01.264885 | orchestrator | 2025-09-20 10:57:01.264895 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-20 10:57:01.264907 | orchestrator | Saturday 20 September 2025 10:56:46 +0000 (0:00:02.753) 0:02:54.095 **** 2025-09-20 10:57:01.264916 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:57:01.264926 | orchestrator | 2025-09-20 10:57:01.264936 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-20 10:57:01.264946 | orchestrator | Saturday 20 September 2025 10:56:47 +0000 (0:00:00.582) 0:02:54.678 **** 2025-09-20 10:57:01.264955 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.264965 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.264975 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.264985 | orchestrator | 2025-09-20 10:57:01.264995 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-20 10:57:01.265004 | 
orchestrator | Saturday 20 September 2025 10:56:49 +0000 (0:00:02.050) 0:02:56.728 **** 2025-09-20 10:57:01.265014 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.265043 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.265053 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.265063 | orchestrator | 2025-09-20 10:57:01.265073 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-20 10:57:01.265083 | orchestrator | Saturday 20 September 2025 10:56:51 +0000 (0:00:02.125) 0:02:58.854 **** 2025-09-20 10:57:01.265099 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.265109 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.265119 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.265128 | orchestrator | 2025-09-20 10:57:01.265138 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-20 10:57:01.265148 | orchestrator | Saturday 20 September 2025 10:56:53 +0000 (0:00:02.053) 0:03:00.908 **** 2025-09-20 10:57:01.265158 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.265168 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.265177 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:57:01.265187 | orchestrator | 2025-09-20 10:57:01.265197 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-20 10:57:01.265207 | orchestrator | Saturday 20 September 2025 10:56:55 +0000 (0:00:01.997) 0:03:02.906 **** 2025-09-20 10:57:01.265217 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:57:01.265227 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:57:01.265237 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:57:01.265246 | orchestrator | 2025-09-20 10:57:01.265256 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-20 10:57:01.265266 | orchestrator | Saturday 20 September 2025 10:56:58 +0000 (0:00:02.908) 0:03:05.814 **** 2025-09-20 10:57:01.265276 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:57:01.265285 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:57:01.265295 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:57:01.265305 | orchestrator | 2025-09-20 10:57:01.265315 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:57:01.265325 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-20 10:57:01.265340 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-20 10:57:01.265352 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-20 10:57:01.265362 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-20 10:57:01.265372 | orchestrator | 2025-09-20 10:57:01.265382 | orchestrator | 2025-09-20 10:57:01.265392 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:57:01.265401 | orchestrator | Saturday 20 September 2025 10:56:58 +0000 (0:00:00.463) 0:03:06.278 **** 2025-09-20 10:57:01.265411 | orchestrator | =============================================================================== 2025-09-20 10:57:01.265421 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 
42.48s 2025-09-20 10:57:01.265430 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.13s 2025-09-20 10:57:01.265440 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.94s 2025-09-20 10:57:01.265449 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-09-20 10:57:01.265459 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.81s 2025-09-20 10:57:01.265469 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.14s 2025-09-20 10:57:01.265484 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.77s 2025-09-20 10:57:01.265494 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.14s 2025-09-20 10:57:01.265503 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.86s 2025-09-20 10:57:01.265513 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.55s 2025-09-20 10:57:01.265523 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.31s 2025-09-20 10:57:01.265532 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.14s 2025-09-20 10:57:01.265548 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.01s 2025-09-20 10:57:01.265557 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.91s 2025-09-20 10:57:01.265567 | orchestrator | Check MariaDB service --------------------------------------------------- 2.79s 2025-09-20 10:57:01.265577 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.75s 2025-09-20 10:57:01.265586 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.32s 2025-09-20 10:57:01.265596 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.28s 2025-09-20 10:57:01.265606 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.13s 2025-09-20 10:57:01.265615 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.05s 2025-09-20 10:57:01.265625 | orchestrator | 2025-09-20 10:57:01 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:01.265635 | orchestrator | 2025-09-20 10:57:01 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:01.265645 | orchestrator | 2025-09-20 10:57:01 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:04.305256 | orchestrator | 2025-09-20 10:57:04 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:04.306219 | orchestrator | 2025-09-20 10:57:04 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:04.307988 | orchestrator | 2025-09-20 10:57:04 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:04.308066 | orchestrator | 2025-09-20 10:57:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:07.349824 | orchestrator | 2025-09-20 10:57:07 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:07.352373 | orchestrator | 2025-09-20 10:57:07 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 
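The ignored failures in the play above ("Check MariaDB service" against 192.168.16.9:3306 and "Check MariaDB service port liveness" on each node) all use the same probe pattern: open a TCP connection to port 3306 and look for the string "MariaDB" in the server greeting, treating a timeout as "not deployed yet" rather than as an error. A minimal sketch of such a probe with Ansible's wait_for module, using illustrative host and variable names rather than the exact task from the playbook:

    - name: Check whether a MariaDB server already answers on the database address
      # The MySQL/MariaDB handshake begins with a greeting that contains the
      # server version string, so matching "MariaDB" distinguishes an existing
      # cluster from a fresh deployment.
      wait_for:
        host: "{{ database_address | default('192.168.16.9') }}"
        port: 3306
        search_regex: MariaDB
        timeout: 10
      register: check_mariadb_port
      ignore_errors: true

Because the result is registered and the failure ignored, later tasks can branch on it, which is why "Set kolla_action_mariadb = upgrade if MariaDB is already running" was skipped in this run and the deployment proceeded with a fresh cluster bootstrap.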
2025-09-20 10:57:07.353357 | orchestrator | 2025-09-20 10:57:07 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:07.353385 | orchestrator | 2025-09-20 10:57:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:10.383326 | orchestrator | 2025-09-20 10:57:10 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:10.385474 | orchestrator | 2025-09-20 10:57:10 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:10.386885 | orchestrator | 2025-09-20 10:57:10 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:10.387154 | orchestrator | 2025-09-20 10:57:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:13.418861 | orchestrator | 2025-09-20 10:57:13 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:13.419530 | orchestrator | 2025-09-20 10:57:13 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:13.420062 | orchestrator | 2025-09-20 10:57:13 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:13.420313 | orchestrator | 2025-09-20 10:57:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:16.458923 | orchestrator | 2025-09-20 10:57:16 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:16.459290 | orchestrator | 2025-09-20 10:57:16 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:16.461095 | orchestrator | 2025-09-20 10:57:16 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:16.461151 | orchestrator | 2025-09-20 10:57:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:19.493854 | orchestrator | 2025-09-20 10:57:19 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:19.495531 | orchestrator | 2025-09-20 10:57:19 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:19.496996 | orchestrator | 2025-09-20 10:57:19 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:19.497181 | orchestrator | 2025-09-20 10:57:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:22.531438 | orchestrator | 2025-09-20 10:57:22 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:22.533152 | orchestrator | 2025-09-20 10:57:22 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:22.534678 | orchestrator | 2025-09-20 10:57:22 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:22.534705 | orchestrator | 2025-09-20 10:57:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:25.581627 | orchestrator | 2025-09-20 10:57:25 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:25.583859 | orchestrator | 2025-09-20 10:57:25 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:25.584947 | orchestrator | 2025-09-20 10:57:25 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:25.585424 | orchestrator | 2025-09-20 10:57:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:28.620957 | orchestrator | 2025-09-20 10:57:28 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:28.622936 | orchestrator | 
2025-09-20 10:57:28 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:28.625287 | orchestrator | 2025-09-20 10:57:28 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:28.626364 | orchestrator | 2025-09-20 10:57:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:31.657594 | orchestrator | 2025-09-20 10:57:31 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:31.658919 | orchestrator | 2025-09-20 10:57:31 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:31.660191 | orchestrator | 2025-09-20 10:57:31 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:31.660220 | orchestrator | 2025-09-20 10:57:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:34.693893 | orchestrator | 2025-09-20 10:57:34 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:34.694446 | orchestrator | 2025-09-20 10:57:34 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:34.695040 | orchestrator | 2025-09-20 10:57:34 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:34.695517 | orchestrator | 2025-09-20 10:57:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:37.734535 | orchestrator | 2025-09-20 10:57:37 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:37.736885 | orchestrator | 2025-09-20 10:57:37 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:37.738876 | orchestrator | 2025-09-20 10:57:37 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:37.738958 | orchestrator | 2025-09-20 10:57:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:40.776383 | orchestrator | 2025-09-20 10:57:40 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:40.777610 | orchestrator | 2025-09-20 10:57:40 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:40.779537 | orchestrator | 2025-09-20 10:57:40 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:40.780082 | orchestrator | 2025-09-20 10:57:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:43.835627 | orchestrator | 2025-09-20 10:57:43 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:43.839737 | orchestrator | 2025-09-20 10:57:43 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:43.842414 | orchestrator | 2025-09-20 10:57:43 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:43.842467 | orchestrator | 2025-09-20 10:57:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:46.899810 | orchestrator | 2025-09-20 10:57:46 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:46.901614 | orchestrator | 2025-09-20 10:57:46 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:46.902891 | orchestrator | 2025-09-20 10:57:46 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:46.902902 | orchestrator | 2025-09-20 10:57:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:49.962771 | orchestrator | 2025-09-20 10:57:49 | INFO  | Task 
cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:49.963779 | orchestrator | 2025-09-20 10:57:49 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:49.966299 | orchestrator | 2025-09-20 10:57:49 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:49.967247 | orchestrator | 2025-09-20 10:57:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:53.015305 | orchestrator | 2025-09-20 10:57:53 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:53.017589 | orchestrator | 2025-09-20 10:57:53 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:53.020111 | orchestrator | 2025-09-20 10:57:53 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:53.020139 | orchestrator | 2025-09-20 10:57:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:56.067487 | orchestrator | 2025-09-20 10:57:56 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:56.069363 | orchestrator | 2025-09-20 10:57:56 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:56.070899 | orchestrator | 2025-09-20 10:57:56 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:56.070929 | orchestrator | 2025-09-20 10:57:56 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:57:59.116584 | orchestrator | 2025-09-20 10:57:59 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state STARTED 2025-09-20 10:57:59.118848 | orchestrator | 2025-09-20 10:57:59 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:57:59.121174 | orchestrator | 2025-09-20 10:57:59 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:57:59.121232 | orchestrator | 2025-09-20 10:57:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:02.159926 | orchestrator | 2025-09-20 10:58:02.160154 | orchestrator | 2025-09-20 10:58:02.160181 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-20 10:58:02.160194 | orchestrator | 2025-09-20 10:58:02.160206 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-20 10:58:02.160218 | orchestrator | Saturday 20 September 2025 10:55:53 +0000 (0:00:00.535) 0:00:00.535 **** 2025-09-20 10:58:02.160230 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:58:02.160242 | orchestrator | 2025-09-20 10:58:02.160254 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-20 10:58:02.160265 | orchestrator | Saturday 20 September 2025 10:55:53 +0000 (0:00:00.546) 0:00:01.082 **** 2025-09-20 10:58:02.160276 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.160288 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.160299 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.160310 | orchestrator | 2025-09-20 10:58:02.160321 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-20 10:58:02.160332 | orchestrator | Saturday 20 September 2025 10:55:54 +0000 (0:00:00.666) 0:00:01.748 **** 2025-09-20 10:58:02.160343 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.160354 | orchestrator | ok: [testbed-node-4] 2025-09-20 
10:58:02.160365 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.160375 | orchestrator | 2025-09-20 10:58:02.160402 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-20 10:58:02.161048 | orchestrator | Saturday 20 September 2025 10:55:54 +0000 (0:00:00.271) 0:00:02.020 **** 2025-09-20 10:58:02.161064 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.161075 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.161087 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.161098 | orchestrator | 2025-09-20 10:58:02.161374 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-20 10:58:02.161391 | orchestrator | Saturday 20 September 2025 10:55:55 +0000 (0:00:00.766) 0:00:02.786 **** 2025-09-20 10:58:02.161402 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.161413 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.161424 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.161435 | orchestrator | 2025-09-20 10:58:02.161446 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-20 10:58:02.161457 | orchestrator | Saturday 20 September 2025 10:55:55 +0000 (0:00:00.408) 0:00:03.194 **** 2025-09-20 10:58:02.161468 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.161479 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.161490 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.161501 | orchestrator | 2025-09-20 10:58:02.161512 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-20 10:58:02.161523 | orchestrator | Saturday 20 September 2025 10:55:56 +0000 (0:00:00.423) 0:00:03.618 **** 2025-09-20 10:58:02.161534 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.161544 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.161555 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.161566 | orchestrator | 2025-09-20 10:58:02.161577 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-20 10:58:02.161588 | orchestrator | Saturday 20 September 2025 10:55:56 +0000 (0:00:00.327) 0:00:03.945 **** 2025-09-20 10:58:02.161599 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.161611 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.161622 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.161632 | orchestrator | 2025-09-20 10:58:02.161643 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-20 10:58:02.161654 | orchestrator | Saturday 20 September 2025 10:55:57 +0000 (0:00:00.534) 0:00:04.480 **** 2025-09-20 10:58:02.161665 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.161676 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.161687 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.161698 | orchestrator | 2025-09-20 10:58:02.161709 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-20 10:58:02.161734 | orchestrator | Saturday 20 September 2025 10:55:57 +0000 (0:00:00.320) 0:00:04.800 **** 2025-09-20 10:58:02.161746 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:58:02.161757 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:58:02.161768 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:58:02.161778 | orchestrator | 2025-09-20 10:58:02.161789 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-20 10:58:02.161800 | orchestrator | Saturday 20 September 2025 10:55:58 +0000 (0:00:00.652) 0:00:05.453 **** 2025-09-20 10:58:02.161811 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.161822 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.161833 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.161844 | orchestrator | 2025-09-20 10:58:02.161855 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-20 10:58:02.161866 | orchestrator | Saturday 20 September 2025 10:55:58 +0000 (0:00:00.444) 0:00:05.897 **** 2025-09-20 10:58:02.161877 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:58:02.161887 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:58:02.161898 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:58:02.161909 | orchestrator | 2025-09-20 10:58:02.161920 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-20 10:58:02.161931 | orchestrator | Saturday 20 September 2025 10:56:00 +0000 (0:00:02.083) 0:00:07.980 **** 2025-09-20 10:58:02.161942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 10:58:02.161953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 10:58:02.161964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 10:58:02.161975 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.161986 | orchestrator | 2025-09-20 10:58:02.162087 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-20 10:58:02.162157 | orchestrator | Saturday 20 September 2025 10:56:01 +0000 (0:00:00.397) 0:00:08.378 **** 2025-09-20 10:58:02.162174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.162190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.162203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.162214 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162225 | orchestrator | 2025-09-20 10:58:02.162236 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-20 10:58:02.162257 | orchestrator | Saturday 20 September 2025 10:56:02 +0000 (0:00:00.919) 0:00:09.297 **** 2025-09-20 10:58:02.162270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.162284 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.162305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.162317 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162328 | orchestrator | 2025-09-20 10:58:02.162339 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-20 10:58:02.162350 | orchestrator | Saturday 20 September 2025 10:56:02 +0000 (0:00:00.154) 0:00:09.452 **** 2025-09-20 10:58:02.162363 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '58b61afea484', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-20 10:55:59.302060', 'end': '2025-09-20 10:55:59.346278', 'delta': '0:00:00.044218', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['58b61afea484'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-20 10:58:02.162379 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0082811a5e96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-20 10:56:00.069495', 'end': '2025-09-20 10:56:00.118083', 'delta': '0:00:00.048588', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0082811a5e96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-20 10:58:02.162423 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '62fe8021ea63', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-20 10:56:00.580908', 'end': '2025-09-20 10:56:00.627209', 'delta': '0:00:00.046301', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['62fe8021ea63'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-20 10:58:02.162437 | orchestrator | 2025-09-20 10:58:02.162449 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-20 10:58:02.162460 | orchestrator | Saturday 20 September 2025 10:56:02 +0000 (0:00:00.392) 0:00:09.844 **** 2025-09-20 10:58:02.162471 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.162482 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.162493 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.162504 | orchestrator | 2025-09-20 10:58:02.162521 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-20 10:58:02.162532 | orchestrator | Saturday 20 September 2025 10:56:03 +0000 (0:00:00.474) 0:00:10.319 **** 2025-09-20 10:58:02.162550 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-20 10:58:02.162562 | orchestrator | 2025-09-20 10:58:02.162573 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-20 10:58:02.162584 | orchestrator | Saturday 20 September 2025 10:56:04 +0000 (0:00:01.702) 0:00:12.021 **** 2025-09-20 10:58:02.162594 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162606 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.162617 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.162627 | orchestrator | 2025-09-20 10:58:02.162639 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-20 10:58:02.162649 | orchestrator | Saturday 20 September 2025 10:56:05 +0000 (0:00:00.321) 0:00:12.342 **** 2025-09-20 10:58:02.162660 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162671 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.162682 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.162693 | orchestrator | 2025-09-20 10:58:02.162704 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-20 10:58:02.162715 | orchestrator | Saturday 20 September 2025 10:56:05 +0000 (0:00:00.459) 0:00:12.801 **** 2025-09-20 10:58:02.162726 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162737 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.162748 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.162759 | orchestrator | 2025-09-20 10:58:02.162770 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-20 10:58:02.162781 | orchestrator | Saturday 20 September 2025 10:56:06 +0000 (0:00:00.566) 0:00:13.368 **** 2025-09-20 10:58:02.162792 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.162803 | orchestrator | 2025-09-20 10:58:02.162814 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-20 10:58:02.162825 | orchestrator | Saturday 20 September 2025 10:56:06 +0000 (0:00:00.127) 0:00:13.495 **** 2025-09-20 10:58:02.162836 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162847 | orchestrator | 2025-09-20 10:58:02.162858 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2025-09-20 10:58:02.162869 | orchestrator | Saturday 20 September 2025 10:56:06 +0000 (0:00:00.256) 0:00:13.752 **** 2025-09-20 10:58:02.162880 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.162894 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.162913 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.162931 | orchestrator | 2025-09-20 10:58:02.162949 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-20 10:58:02.162969 | orchestrator | Saturday 20 September 2025 10:56:06 +0000 (0:00:00.302) 0:00:14.054 **** 2025-09-20 10:58:02.163086 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.163112 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.163126 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.163137 | orchestrator | 2025-09-20 10:58:02.163148 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-20 10:58:02.163159 | orchestrator | Saturday 20 September 2025 10:56:07 +0000 (0:00:00.316) 0:00:14.370 **** 2025-09-20 10:58:02.163170 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.163181 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.163192 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.163202 | orchestrator | 2025-09-20 10:58:02.163214 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-20 10:58:02.163224 | orchestrator | Saturday 20 September 2025 10:56:07 +0000 (0:00:00.493) 0:00:14.864 **** 2025-09-20 10:58:02.163235 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.163246 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.163257 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.163268 | orchestrator | 2025-09-20 10:58:02.163279 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-20 10:58:02.163300 | orchestrator | Saturday 20 September 2025 10:56:07 +0000 (0:00:00.329) 0:00:15.193 **** 2025-09-20 10:58:02.163311 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.163321 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.163332 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.163343 | orchestrator | 2025-09-20 10:58:02.163354 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-20 10:58:02.163365 | orchestrator | Saturday 20 September 2025 10:56:08 +0000 (0:00:00.328) 0:00:15.522 **** 2025-09-20 10:58:02.163377 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.163395 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.163412 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.163428 | orchestrator | 2025-09-20 10:58:02.163447 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-20 10:58:02.163533 | orchestrator | Saturday 20 September 2025 10:56:08 +0000 (0:00:00.332) 0:00:15.854 **** 2025-09-20 10:58:02.163555 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.163567 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.163578 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.163588 | orchestrator | 2025-09-20 10:58:02.163599 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-20 
10:58:02.163610 | orchestrator | Saturday 20 September 2025 10:56:09 +0000 (0:00:00.529) 0:00:16.383 **** 2025-09-20 10:58:02.163623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504', 'dm-uuid-LVM-GFTN8eCjsDhsvHcLnbBW6Hiira8lKL1udVxFf2qXf8ZmfdhnZhdqKcyB2IPw7k07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04', 'dm-uuid-LVM-6Wey4TM1haZ7gjGkCtFB3Rfa02eGaKNTX7bv20h3mT29Iv3VklR0vD9Ut9ae9rNk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be', 'dm-uuid-LVM-zmHMarQ0GeOvwHa2octALRNuK9Mtv96G2WqbUvaxLe5TQX9CF3AdvLI3zAtwFBsi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084', 'dm-uuid-LVM-CivDXxDPNlR0kW7Fk5YuJ0VOnz3lPYRaQs2J5U20gpn0B3yD0ZtOYrjPGdQ6y2jA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.163908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FHfINo-QbB8-1gtM-lHmb-aZM1-kVg4-ymeA3K', 'scsi-0QEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf', 'scsi-SQEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.163935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UsQCdZ-EDR1-kfc0-B15p-aM1k-8uJJ-f2yIAP', 'scsi-0QEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949', 'scsi-SQEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.163962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.163972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f', 'scsi-SQEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.163983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164086 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.164108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CcMY6h-Yvso-Pyog-AJRg-iOyn-jVml-1IjQeN', 'scsi-0QEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652', 'scsi-SQEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-We5Mb4-JMDz-2gCV-40VR-14de-936x-g35BLT', 'scsi-0QEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58', 'scsi-SQEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19', 'dm-uuid-LVM-NIIieIrZpMwiF4zA1j7rPvZFGNxRh5VjUBRdeW4vjom4PIIluTcQ5EkcZbkGczdj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22', 'scsi-SQEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d', 'dm-uuid-LVM-2G92QntVglL9q1MRd9Z9LPlS1Py9FnebRqBci7ddGzK8oFVFlZTRxaxGpVx2OTF1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
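[editor's note] In the ceph-facts output above, the "Find a running mon container" and "Set_fact running_mon - container" tasks locate an existing monitor by running "docker ps -q --filter name=ceph-mon-<hostname>" against each mon host and keeping the returned container IDs; the current fsid is then read through that container, and the device items are skipped here only because osd_auto_discovery is not enabled. The sketch below is a simplified, local Python approximation of that container lookup; it is illustrative only (the real role delegates the command to each mon host via Ansible rather than running it locally), and the helper name is made up.

import subprocess
from typing import Iterable, Optional, Tuple


def find_running_mon_container(mon_hostnames: Iterable[str],
                               container_binary: str = "docker") -> Optional[Tuple[str, str]]:
    """Return (hostname, container_id) for the first host whose ceph-mon
    container is running, using the same name filter seen in the log above.
    Requires the container binary to be on PATH; raises FileNotFoundError otherwise."""
    for host in mon_hostnames:
        result = subprocess.run(
            [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        container_id = result.stdout.strip()
        if result.returncode == 0 and container_id:
            return host, container_id
    return None


# Example (mirrors the hosts queried above):
# find_running_mon_container(["testbed-node-0", "testbed-node-1", "testbed-node-2"])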
2025-09-20 10:58:02.164277 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.164287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 10:58:02.164384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2pvgxu-QNeD-ciqZ-JOIj-NCHU-4b2C-6GfcOT', 'scsi-0QEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b', 'scsi-SQEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vgBYvF-M13W-988Q-HWt3-20j3-qTzr-1oUxcy', 'scsi-0QEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8', 'scsi-SQEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b', 'scsi-SQEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 10:58:02.164467 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.164483 | orchestrator | 2025-09-20 10:58:02.164498 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-20 10:58:02.164514 | orchestrator | Saturday 20 September 2025 10:56:09 +0000 (0:00:00.549) 0:00:16.932 **** 2025-09-20 10:58:02.164539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504', 'dm-uuid-LVM-GFTN8eCjsDhsvHcLnbBW6Hiira8lKL1udVxFf2qXf8ZmfdhnZhdqKcyB2IPw7k07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04', 'dm-uuid-LVM-6Wey4TM1haZ7gjGkCtFB3Rfa02eGaKNTX7bv20h3mT29Iv3VklR0vD9Ut9ae9rNk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164658 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164684 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be', 'dm-uuid-LVM-zmHMarQ0GeOvwHa2octALRNuK9Mtv96G2WqbUvaxLe5TQX9CF3AdvLI3zAtwFBsi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ecbda4c-6cf3-44c9-8aeb-a6bc4739978d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084', 'dm-uuid-LVM-CivDXxDPNlR0kW7Fk5YuJ0VOnz3lPYRaQs2J5U20gpn0B3yD0ZtOYrjPGdQ6y2jA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8bfbaad6--401f--511d--91f2--acbf67028504-osd--block--8bfbaad6--401f--511d--91f2--acbf67028504'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FHfINo-QbB8-1gtM-lHmb-aZM1-kVg4-ymeA3K', 'scsi-0QEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf', 'scsi-SQEMU_QEMU_HARDDISK_497e6100-ba4e-4e70-85f7-b35af0c206cf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--44b8c0b1--de10--587f--a252--374190a68e04-osd--block--44b8c0b1--de10--587f--a252--374190a68e04'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UsQCdZ-EDR1-kfc0-B15p-aM1k-8uJJ-f2yIAP', 'scsi-0QEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949', 'scsi-SQEMU_QEMU_HARDDISK_696d6a7f-e2ae-4e31-b4d8-740f0d8ea949'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f', 'scsi-SQEMU_QEMU_HARDDISK_31f92631-138d-4bd6-ad62-32e6ca0c065f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.164960 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.164971 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16', 'scsi-SQEMU_QEMU_HARDDISK_43e39c93-643c-4c1a-98e9-cd9d81c0dd99-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165033 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6a9e85d2--bd62--5d0b--9b06--ebe373b508be-osd--block--6a9e85d2--bd62--5d0b--9b06--ebe373b508be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CcMY6h-Yvso-Pyog-AJRg-iOyn-jVml-1IjQeN', 'scsi-0QEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652', 'scsi-SQEMU_QEMU_HARDDISK_28f1987a-6b2b-4def-9528-f2d7153ba652'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d7feb156--b84d--561e--a62b--66fdb35e8084-osd--block--d7feb156--b84d--561e--a62b--66fdb35e8084'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-We5Mb4-JMDz-2gCV-40VR-14de-936x-g35BLT', 'scsi-0QEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58', 'scsi-SQEMU_QEMU_HARDDISK_21304f64-4c3c-4785-baa1-44b6b0fccd58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22', 'scsi-SQEMU_QEMU_HARDDISK_7249c7d6-d18e-42b1-809d-80705e221d22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165082 | orchestrator | 2025-09-20 10:58:02 | INFO  | Task cdb2b011-7b5c-4a75-8aad-08acc6155800 is in state SUCCESS 2025-09-20 10:58:02.165092 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.165107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19', 'dm-uuid-LVM-NIIieIrZpMwiF4zA1j7rPvZFGNxRh5VjUBRdeW4vjom4PIIluTcQ5EkcZbkGczdj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d', 'dm-uuid-LVM-2G92QntVglL9q1MRd9Z9LPlS1Py9FnebRqBci7ddGzK8oFVFlZTRxaxGpVx2OTF1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165216 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165249 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16', 'scsi-SQEMU_QEMU_HARDDISK_dae07622-8d8b-4700-ac99-c09b16db109d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--43c75cb2--27fe--5978--b049--f1a35c211e19-osd--block--43c75cb2--27fe--5978--b049--f1a35c211e19'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2pvgxu-QNeD-ciqZ-JOIj-NCHU-4b2C-6GfcOT', 'scsi-0QEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b', 'scsi-SQEMU_QEMU_HARDDISK_31ba085f-693b-4453-b385-26f20a05fd2b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165278 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f41c3a47--393d--5abf--86b9--e0c2e1b7064d-osd--block--f41c3a47--393d--5abf--86b9--e0c2e1b7064d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vgBYvF-M13W-988Q-HWt3-20j3-qTzr-1oUxcy', 'scsi-0QEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8', 'scsi-SQEMU_QEMU_HARDDISK_c8bcd070-709d-401e-b3ff-1d1dc46d20a8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165288 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b', 'scsi-SQEMU_QEMU_HARDDISK_e293ec10-02fe-4251-bcfc-ccec4462aa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-10-06-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 10:58:02.165324 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.165334 | orchestrator | 2025-09-20 10:58:02.165344 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-20 10:58:02.165354 | orchestrator | Saturday 20 September 2025 10:56:10 +0000 (0:00:00.754) 0:00:17.687 **** 2025-09-20 10:58:02.165364 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.165374 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.165384 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.165394 | orchestrator | 2025-09-20 10:58:02.165404 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-20 10:58:02.165414 | orchestrator | Saturday 20 September 2025 10:56:11 +0000 (0:00:00.686) 0:00:18.373 **** 2025-09-20 10:58:02.165423 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.165433 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.165447 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.165457 | orchestrator | 2025-09-20 10:58:02.165467 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-20 10:58:02.165479 | orchestrator | Saturday 20 September 2025 10:56:11 +0000 (0:00:00.485) 0:00:18.858 **** 2025-09-20 10:58:02.165496 | 
orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.165513 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.165530 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.165546 | orchestrator | 2025-09-20 10:58:02.165563 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-20 10:58:02.165580 | orchestrator | Saturday 20 September 2025 10:56:12 +0000 (0:00:00.640) 0:00:19.499 **** 2025-09-20 10:58:02.165599 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.165617 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.165634 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.165647 | orchestrator | 2025-09-20 10:58:02.165657 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-20 10:58:02.165666 | orchestrator | Saturday 20 September 2025 10:56:12 +0000 (0:00:00.311) 0:00:19.811 **** 2025-09-20 10:58:02.165676 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.165686 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.165695 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.165705 | orchestrator | 2025-09-20 10:58:02.165715 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-20 10:58:02.165725 | orchestrator | Saturday 20 September 2025 10:56:13 +0000 (0:00:00.453) 0:00:20.264 **** 2025-09-20 10:58:02.165734 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.165744 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.165754 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.165763 | orchestrator | 2025-09-20 10:58:02.165773 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-20 10:58:02.165783 | orchestrator | Saturday 20 September 2025 10:56:13 +0000 (0:00:00.526) 0:00:20.791 **** 2025-09-20 10:58:02.165792 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-20 10:58:02.165802 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-20 10:58:02.165832 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-20 10:58:02.165843 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-20 10:58:02.165852 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-20 10:58:02.165862 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-20 10:58:02.165871 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-20 10:58:02.165881 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-20 10:58:02.165890 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-20 10:58:02.165900 | orchestrator | 2025-09-20 10:58:02.165910 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-20 10:58:02.165928 | orchestrator | Saturday 20 September 2025 10:56:14 +0000 (0:00:00.888) 0:00:21.679 **** 2025-09-20 10:58:02.165938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 10:58:02.165947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 10:58:02.165957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 10:58:02.165966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-20 10:58:02.165976 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-20 
10:58:02.165985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-20 10:58:02.166011 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166065 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.166075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-20 10:58:02.166085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-20 10:58:02.166094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-20 10:58:02.166104 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.166113 | orchestrator | 2025-09-20 10:58:02.166123 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-20 10:58:02.166133 | orchestrator | Saturday 20 September 2025 10:56:14 +0000 (0:00:00.379) 0:00:22.059 **** 2025-09-20 10:58:02.166143 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:58:02.166153 | orchestrator | 2025-09-20 10:58:02.166163 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-20 10:58:02.166173 | orchestrator | Saturday 20 September 2025 10:56:15 +0000 (0:00:00.732) 0:00:22.792 **** 2025-09-20 10:58:02.166191 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166202 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.166211 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.166221 | orchestrator | 2025-09-20 10:58:02.166234 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-20 10:58:02.166251 | orchestrator | Saturday 20 September 2025 10:56:15 +0000 (0:00:00.343) 0:00:23.135 **** 2025-09-20 10:58:02.166262 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166272 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.166281 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.166291 | orchestrator | 2025-09-20 10:58:02.166301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-20 10:58:02.166311 | orchestrator | Saturday 20 September 2025 10:56:16 +0000 (0:00:00.309) 0:00:23.445 **** 2025-09-20 10:58:02.166320 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166330 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.166339 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:58:02.166349 | orchestrator | 2025-09-20 10:58:02.166358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-20 10:58:02.166368 | orchestrator | Saturday 20 September 2025 10:56:16 +0000 (0:00:00.352) 0:00:23.797 **** 2025-09-20 10:58:02.166378 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.166387 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.166397 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.166407 | orchestrator | 2025-09-20 10:58:02.166422 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-20 10:58:02.166432 | orchestrator | Saturday 20 September 2025 10:56:17 +0000 (0:00:00.636) 0:00:24.434 **** 2025-09-20 10:58:02.166442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:58:02.166451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2025-09-20 10:58:02.166461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:58:02.166471 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166491 | orchestrator | 2025-09-20 10:58:02.166500 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-20 10:58:02.166510 | orchestrator | Saturday 20 September 2025 10:56:17 +0000 (0:00:00.388) 0:00:24.822 **** 2025-09-20 10:58:02.166520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:58:02.166529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:58:02.166539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:58:02.166548 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166558 | orchestrator | 2025-09-20 10:58:02.166567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-20 10:58:02.166577 | orchestrator | Saturday 20 September 2025 10:56:17 +0000 (0:00:00.399) 0:00:25.221 **** 2025-09-20 10:58:02.166587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 10:58:02.166597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 10:58:02.166606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 10:58:02.166616 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166626 | orchestrator | 2025-09-20 10:58:02.166635 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-20 10:58:02.166645 | orchestrator | Saturday 20 September 2025 10:56:18 +0000 (0:00:00.368) 0:00:25.590 **** 2025-09-20 10:58:02.166655 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:58:02.166664 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:58:02.166674 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:58:02.166684 | orchestrator | 2025-09-20 10:58:02.166693 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-20 10:58:02.166703 | orchestrator | Saturday 20 September 2025 10:56:18 +0000 (0:00:00.374) 0:00:25.965 **** 2025-09-20 10:58:02.166712 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-20 10:58:02.166722 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-20 10:58:02.166732 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-20 10:58:02.166741 | orchestrator | 2025-09-20 10:58:02.166751 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-20 10:58:02.166761 | orchestrator | Saturday 20 September 2025 10:56:19 +0000 (0:00:00.573) 0:00:26.538 **** 2025-09-20 10:58:02.166770 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:58:02.166780 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:58:02.166789 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:58:02.166798 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-20 10:58:02.166808 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-20 10:58:02.166818 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-20 10:58:02.166827 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-09-20 10:58:02.166837 | orchestrator | 2025-09-20 10:58:02.166847 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-20 10:58:02.166856 | orchestrator | Saturday 20 September 2025 10:56:20 +0000 (0:00:01.019) 0:00:27.558 **** 2025-09-20 10:58:02.166866 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 10:58:02.166875 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 10:58:02.166885 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 10:58:02.166895 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-20 10:58:02.166904 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-20 10:58:02.166914 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-20 10:58:02.166929 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-20 10:58:02.166945 | orchestrator | 2025-09-20 10:58:02.166955 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-20 10:58:02.166964 | orchestrator | Saturday 20 September 2025 10:56:22 +0000 (0:00:02.123) 0:00:29.681 **** 2025-09-20 10:58:02.166974 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:58:02.166984 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:58:02.167022 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-20 10:58:02.167032 | orchestrator | 2025-09-20 10:58:02.167042 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-20 10:58:02.167051 | orchestrator | Saturday 20 September 2025 10:56:22 +0000 (0:00:00.408) 0:00:30.090 **** 2025-09-20 10:58:02.167062 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:58:02.167077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:58:02.167088 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:58:02.167097 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:58:02.167107 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-20 10:58:02.167117 | orchestrator | 2025-09-20 10:58:02.167127 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-20 10:58:02.167136 | orchestrator | Saturday 20 September 2025 10:57:08 +0000 (0:00:45.452) 0:01:15.542 **** 2025-09-20 10:58:02.167146 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167156 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167165 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167175 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167184 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167194 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167204 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-20 10:58:02.167213 | orchestrator | 2025-09-20 10:58:02.167223 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-20 10:58:02.167233 | orchestrator | Saturday 20 September 2025 10:57:31 +0000 (0:00:22.986) 0:01:38.529 **** 2025-09-20 10:58:02.167242 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167252 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167261 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167271 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167286 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167296 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167306 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 10:58:02.167315 | orchestrator | 2025-09-20 10:58:02.167325 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-20 10:58:02.167334 | orchestrator | Saturday 20 September 2025 10:57:42 +0000 (0:00:11.060) 0:01:49.590 **** 2025-09-20 10:58:02.167344 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167354 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:58:02.167363 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:58:02.167373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167389 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:58:02.167399 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:58:02.167409 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167418 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:58:02.167428 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:58:02.167438 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167447 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:58:02.167457 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:58:02.167466 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167476 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:58:02.167485 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:58:02.167495 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 10:58:02.167512 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 10:58:02.167527 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 10:58:02.167537 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-20 10:58:02.167547 | orchestrator | 2025-09-20 10:58:02.167556 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:58:02.167566 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-20 10:58:02.167577 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-20 10:58:02.167587 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-20 10:58:02.167596 | orchestrator | 2025-09-20 10:58:02.167606 | orchestrator | 2025-09-20 10:58:02.167616 | orchestrator | 2025-09-20 10:58:02.167626 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:58:02.167635 | orchestrator | Saturday 20 September 2025 10:57:59 +0000 (0:00:17.402) 0:02:06.993 **** 2025-09-20 10:58:02.167645 | orchestrator | =============================================================================== 2025-09-20 10:58:02.167654 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.45s 2025-09-20 10:58:02.167664 | orchestrator | generate keys ---------------------------------------------------------- 22.99s 2025-09-20 10:58:02.167681 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.40s 2025-09-20 10:58:02.167691 | orchestrator | get keys from monitors ------------------------------------------------- 11.06s 2025-09-20 10:58:02.167701 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.12s 2025-09-20 10:58:02.167710 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.08s 2025-09-20 10:58:02.167720 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s 2025-09-20 10:58:02.167729 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2025-09-20 10:58:02.167739 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s 2025-09-20 10:58:02.167749 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.89s 2025-09-20 10:58:02.167758 | orchestrator | 
ceph-facts : Check if podman binary is present -------------------------- 0.77s 2025-09-20 10:58:02.167768 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.75s 2025-09-20 10:58:02.167778 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2025-09-20 10:58:02.167787 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s 2025-09-20 10:58:02.167797 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2025-09-20 10:58:02.167806 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2025-09-20 10:58:02.167816 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2025-09-20 10:58:02.167826 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.64s 2025-09-20 10:58:02.167835 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.57s 2025-09-20 10:58:02.167845 | orchestrator | ceph-facts : Set_fact fsid ---------------------------------------------- 0.57s 2025-09-20 10:58:02.167854 | orchestrator | 2025-09-20 10:58:02 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:02.167864 | orchestrator | 2025-09-20 10:58:02 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:02.167874 | orchestrator | 2025-09-20 10:58:02 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:02.167884 | orchestrator | 2025-09-20 10:58:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:05.194876 | orchestrator | 2025-09-20 10:58:05 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:05.195830 | orchestrator | 2025-09-20 10:58:05 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:05.197364 | orchestrator | 2025-09-20 10:58:05 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:05.197602 | orchestrator | 2025-09-20 10:58:05 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:08.240927 | orchestrator | 2025-09-20 10:58:08 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:08.243888 | orchestrator | 2025-09-20 10:58:08 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:08.246496 | orchestrator | 2025-09-20 10:58:08 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:08.246545 | orchestrator | 2025-09-20 10:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:11.288594 | orchestrator | 2025-09-20 10:58:11 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:11.290313 | orchestrator | 2025-09-20 10:58:11 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:11.291452 | orchestrator | 2025-09-20 10:58:11 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:11.291607 | orchestrator | 2025-09-20 10:58:11 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:14.330739 | orchestrator | 2025-09-20 10:58:14 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:14.331130 | orchestrator | 2025-09-20 10:58:14 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 
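The interleaved INFO lines come from the deployment driver on the manager: it tracks the queued Ansible runs by their task UUIDs and re-checks their state roughly once per second until each one reports SUCCESS. A minimal sketch of that wait-until pattern expressed as an Ansible task (illustrative only; check-task-state is a hypothetical helper, not an osism command) could look like this:

- name: Wait for a queued deployment task to finish (illustrative sketch)
  hosts: localhost
  gather_facts: false
  vars:
    task_id: 8cbc0933-cc27-431e-8aa4-4d8e1428e54c
  tasks:
    - name: Poll the task state once per second until it reports SUCCESS
      ansible.builtin.command: "check-task-state {{ task_id }}"  # hypothetical helper command, not part of osism
      register: task_state
      changed_when: false
      retries: 300
      delay: 1
      until: task_state.stdout == "SUCCESS"

The real driver polls several task IDs in parallel and simply sleeps between checks, which is what the repeated "Wait 1 second(s) until the next check" lines record.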
2025-09-20 10:58:14.331856 | orchestrator | 2025-09-20 10:58:14 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:14.331882 | orchestrator | 2025-09-20 10:58:14 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:17.375232 | orchestrator | 2025-09-20 10:58:17 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:17.376124 | orchestrator | 2025-09-20 10:58:17 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:17.379312 | orchestrator | 2025-09-20 10:58:17 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:17.379732 | orchestrator | 2025-09-20 10:58:17 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:20.426617 | orchestrator | 2025-09-20 10:58:20 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:20.427363 | orchestrator | 2025-09-20 10:58:20 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:20.430248 | orchestrator | 2025-09-20 10:58:20 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:20.430551 | orchestrator | 2025-09-20 10:58:20 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:23.484269 | orchestrator | 2025-09-20 10:58:23 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:23.486482 | orchestrator | 2025-09-20 10:58:23 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:23.487213 | orchestrator | 2025-09-20 10:58:23 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:23.487238 | orchestrator | 2025-09-20 10:58:23 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:26.537314 | orchestrator | 2025-09-20 10:58:26 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state STARTED 2025-09-20 10:58:26.539117 | orchestrator | 2025-09-20 10:58:26 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:26.541349 | orchestrator | 2025-09-20 10:58:26 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:26.541765 | orchestrator | 2025-09-20 10:58:26 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:29.594501 | orchestrator | 2025-09-20 10:58:29 | INFO  | Task 8cbc0933-cc27-431e-8aa4-4d8e1428e54c is in state SUCCESS 2025-09-20 10:58:29.595725 | orchestrator | 2025-09-20 10:58:29 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:29.597339 | orchestrator | 2025-09-20 10:58:29 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:29.598958 | orchestrator | 2025-09-20 10:58:29 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:29.599026 | orchestrator | 2025-09-20 10:58:29 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:32.665205 | orchestrator | 2025-09-20 10:58:32 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:32.666716 | orchestrator | 2025-09-20 10:58:32 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:32.669405 | orchestrator | 2025-09-20 10:58:32 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:32.669463 | orchestrator | 2025-09-20 10:58:32 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:35.708360 | orchestrator | 
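Once the Ceph-related tasks report SUCCESS, the next play in the log ("Copy ceph keys to the configuration repository") fetches the client keyrings from the first monitor and writes them into the share and configuration directories on the manager. A minimal sketch of that fetch-and-write pattern (an illustration with placeholder paths and a shortened keyring list, not the actual OSISM play) could look like this:

- name: Copy ceph keys to the configuration repository (illustrative sketch)
  hosts: testbed-manager
  gather_facts: false
  vars:
    ceph_keyrings:
      - ceph.client.admin.keyring
      - ceph.client.cinder.keyring
      - ceph.client.glance.keyring
    share_directory: /share                      # placeholder path
    configuration_directory: /opt/configuration  # placeholder path
  tasks:
    - name: Fetch all ceph keys from the first monitor
      ansible.builtin.slurp:
        src: "/etc/ceph/{{ item }}"
      delegate_to: testbed-node-0
      register: fetched_keyrings
      loop: "{{ ceph_keyrings }}"

    - name: Create share directory
      ansible.builtin.file:
        path: "{{ share_directory }}"
        state: directory
        mode: "0750"

    - name: Write ceph keys to the share directory
      ansible.builtin.copy:
        content: "{{ item.content | b64decode }}"
        dest: "{{ share_directory }}/{{ item.item }}"
        mode: "0600"
      loop: "{{ fetched_keyrings.results }}"
      loop_control:
        label: "{{ item.item }}"

    - name: Write ceph keys to the configuration directory
      ansible.builtin.copy:
        content: "{{ item.content | b64decode }}"
        dest: "{{ configuration_directory }}/{{ item.item }}"
        mode: "0600"
      loop: "{{ fetched_keyrings.results }}"
      loop_control:
        label: "{{ item.item }}"

The actual play additionally handles the cinder-backup, nova, gnocchi and manila keyrings, as the fetch loop later in the log shows.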
2025-09-20 10:58:35 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:35.709801 | orchestrator | 2025-09-20 10:58:35 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:35.710546 | orchestrator | 2025-09-20 10:58:35 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:35.710698 | orchestrator | 2025-09-20 10:58:35 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:38.741416 | orchestrator | 2025-09-20 10:58:38 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state STARTED 2025-09-20 10:58:38.741683 | orchestrator | 2025-09-20 10:58:38 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:38.742606 | orchestrator | 2025-09-20 10:58:38 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:38.742635 | orchestrator | 2025-09-20 10:58:38 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:41.778562 | orchestrator | 2025-09-20 10:58:41 | INFO  | Task 79de8fa0-7caf-4d0b-9353-549ea7495f6b is in state SUCCESS 2025-09-20 10:58:41.779664 | orchestrator | 2025-09-20 10:58:41.779689 | orchestrator | 2025-09-20 10:58:41.779696 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-20 10:58:41.779702 | orchestrator | 2025-09-20 10:58:41.779707 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-20 10:58:41.779713 | orchestrator | Saturday 20 September 2025 10:58:03 +0000 (0:00:00.147) 0:00:00.147 **** 2025-09-20 10:58:41.779718 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-20 10:58:41.779725 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779730 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779735 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 10:58:41.779741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779746 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-20 10:58:41.779751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-20 10:58:41.779755 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-20 10:58:41.779760 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-20 10:58:41.779765 | orchestrator | 2025-09-20 10:58:41.779769 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-20 10:58:41.779774 | orchestrator | Saturday 20 September 2025 10:58:07 +0000 (0:00:03.876) 0:00:04.024 **** 2025-09-20 10:58:41.779779 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 10:58:41.779783 | orchestrator | 2025-09-20 10:58:41.779788 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-20 10:58:41.779792 | orchestrator | Saturday 20 September 2025 10:58:08 +0000 (0:00:00.890) 0:00:04.914 **** 2025-09-20 10:58:41.779797 | 
orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-20 10:58:41.779801 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779806 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779810 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 10:58:41.779834 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779838 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-20 10:58:41.779843 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-20 10:58:41.779847 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-20 10:58:41.779851 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-20 10:58:41.779855 | orchestrator | 2025-09-20 10:58:41.779860 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-20 10:58:41.779864 | orchestrator | Saturday 20 September 2025 10:58:20 +0000 (0:00:12.083) 0:00:16.997 **** 2025-09-20 10:58:41.779878 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-20 10:58:41.779883 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779893 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779897 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 10:58:41.779902 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-20 10:58:41.779906 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-20 10:58:41.779910 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-20 10:58:41.779914 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-20 10:58:41.779919 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-20 10:58:41.779923 | orchestrator | 2025-09-20 10:58:41.779927 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:58:41.779932 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:58:41.779938 | orchestrator | 2025-09-20 10:58:41.779943 | orchestrator | 2025-09-20 10:58:41.779947 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:58:41.779962 | orchestrator | Saturday 20 September 2025 10:58:27 +0000 (0:00:06.835) 0:00:23.833 **** 2025-09-20 10:58:41.779966 | orchestrator | =============================================================================== 2025-09-20 10:58:41.780001 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.08s 2025-09-20 10:58:41.780006 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.84s 2025-09-20 10:58:41.780011 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.88s 2025-09-20 10:58:41.780015 | orchestrator | Create share directory 
-------------------------------------------------- 0.89s 2025-09-20 10:58:41.780019 | orchestrator | 2025-09-20 10:58:41.780023 | orchestrator | 2025-09-20 10:58:41.780028 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:58:41.780059 | orchestrator | 2025-09-20 10:58:41.780074 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:58:41.780103 | orchestrator | Saturday 20 September 2025 10:57:03 +0000 (0:00:00.268) 0:00:00.268 **** 2025-09-20 10:58:41.780109 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780114 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780119 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780123 | orchestrator | 2025-09-20 10:58:41.780127 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:58:41.780132 | orchestrator | Saturday 20 September 2025 10:57:03 +0000 (0:00:00.327) 0:00:00.596 **** 2025-09-20 10:58:41.780136 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-20 10:58:41.780141 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-20 10:58:41.780145 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-20 10:58:41.780156 | orchestrator | 2025-09-20 10:58:41.780161 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-20 10:58:41.780165 | orchestrator | 2025-09-20 10:58:41.780170 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 10:58:41.780174 | orchestrator | Saturday 20 September 2025 10:57:03 +0000 (0:00:00.437) 0:00:01.033 **** 2025-09-20 10:58:41.780178 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:58:41.780183 | orchestrator | 2025-09-20 10:58:41.780187 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-20 10:58:41.780192 | orchestrator | Saturday 20 September 2025 10:57:04 +0000 (0:00:00.516) 0:00:01.549 **** 2025-09-20 10:58:41.780201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.780218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.780227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.780233 | orchestrator | 2025-09-20 10:58:41.780239 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-20 10:58:41.780370 | orchestrator | Saturday 20 September 2025 10:57:05 +0000 (0:00:01.120) 0:00:02.670 **** 2025-09-20 10:58:41.780375 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780379 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780383 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780388 | orchestrator | 2025-09-20 10:58:41.780392 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 10:58:41.780397 | orchestrator | Saturday 20 September 2025 10:57:05 +0000 (0:00:00.458) 0:00:03.128 **** 2025-09-20 10:58:41.780401 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-20 10:58:41.780410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-20 10:58:41.780418 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-20 10:58:41.780422 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-20 10:58:41.780426 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-20 10:58:41.780431 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-20 10:58:41.780435 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-20 10:58:41.780439 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-20 10:58:41.780444 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-20 10:58:41.780448 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-20 10:58:41.780452 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-20 10:58:41.780457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-20 10:58:41.780461 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-20 10:58:41.780465 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-20 10:58:41.780470 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-20 10:58:41.780474 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-20 10:58:41.780478 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-20 10:58:41.780483 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-20 10:58:41.780487 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-20 10:58:41.780491 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-20 10:58:41.780496 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-20 10:58:41.780500 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-20 10:58:41.780504 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-20 10:58:41.780508 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-20 10:58:41.780513 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-20 10:58:41.780519 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-20 10:58:41.780524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-20 10:58:41.780528 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-20 10:58:41.780532 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-20 10:58:41.780537 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-20 10:58:41.780541 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-20 10:58:41.780545 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-20 10:58:41.780559 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-20 
10:58:41.780563 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-20 10:58:41.780568 | orchestrator | 2025-09-20 10:58:41.780575 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.780580 | orchestrator | Saturday 20 September 2025 10:57:06 +0000 (0:00:00.760) 0:00:03.889 **** 2025-09-20 10:58:41.780585 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780589 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780593 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780598 | orchestrator | 2025-09-20 10:58:41.780602 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.780607 | orchestrator | Saturday 20 September 2025 10:57:06 +0000 (0:00:00.295) 0:00:04.185 **** 2025-09-20 10:58:41.780611 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780615 | orchestrator | 2025-09-20 10:58:41.780620 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.780627 | orchestrator | Saturday 20 September 2025 10:57:07 +0000 (0:00:00.131) 0:00:04.317 **** 2025-09-20 10:58:41.780631 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780636 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.780640 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.780645 | orchestrator | 2025-09-20 10:58:41.780649 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.780654 | orchestrator | Saturday 20 September 2025 10:57:07 +0000 (0:00:00.447) 0:00:04.764 **** 2025-09-20 10:58:41.780658 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780662 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780667 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780671 | orchestrator | 2025-09-20 10:58:41.780675 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.780680 | orchestrator | Saturday 20 September 2025 10:57:07 +0000 (0:00:00.301) 0:00:05.065 **** 2025-09-20 10:58:41.780684 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780689 | orchestrator | 2025-09-20 10:58:41.780693 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.780697 | orchestrator | Saturday 20 September 2025 10:57:08 +0000 (0:00:00.155) 0:00:05.221 **** 2025-09-20 10:58:41.780702 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780706 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.780711 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.780715 | orchestrator | 2025-09-20 10:58:41.780719 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.780724 | orchestrator | Saturday 20 September 2025 10:57:08 +0000 (0:00:00.274) 0:00:05.495 **** 2025-09-20 10:58:41.780728 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780733 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780737 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780742 | orchestrator | 2025-09-20 10:58:41.780746 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.780750 | orchestrator | 
Saturday 20 September 2025 10:57:08 +0000 (0:00:00.270) 0:00:05.765 **** 2025-09-20 10:58:41.780755 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780759 | orchestrator | 2025-09-20 10:58:41.780764 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.780768 | orchestrator | Saturday 20 September 2025 10:57:08 +0000 (0:00:00.122) 0:00:05.888 **** 2025-09-20 10:58:41.780772 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780777 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.780781 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.780785 | orchestrator | 2025-09-20 10:58:41.780793 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.780798 | orchestrator | Saturday 20 September 2025 10:57:09 +0000 (0:00:00.393) 0:00:06.282 **** 2025-09-20 10:58:41.780802 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780807 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780811 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780815 | orchestrator | 2025-09-20 10:58:41.780820 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.780824 | orchestrator | Saturday 20 September 2025 10:57:09 +0000 (0:00:00.271) 0:00:06.554 **** 2025-09-20 10:58:41.780829 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780833 | orchestrator | 2025-09-20 10:58:41.780837 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.780842 | orchestrator | Saturday 20 September 2025 10:57:09 +0000 (0:00:00.148) 0:00:06.702 **** 2025-09-20 10:58:41.780846 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780850 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.780855 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.780859 | orchestrator | 2025-09-20 10:58:41.780863 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.780868 | orchestrator | Saturday 20 September 2025 10:57:09 +0000 (0:00:00.270) 0:00:06.973 **** 2025-09-20 10:58:41.780872 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.780876 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.780881 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.780885 | orchestrator | 2025-09-20 10:58:41.780889 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.780894 | orchestrator | Saturday 20 September 2025 10:57:10 +0000 (0:00:00.271) 0:00:07.244 **** 2025-09-20 10:58:41.780898 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780902 | orchestrator | 2025-09-20 10:58:41.780907 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.780911 | orchestrator | Saturday 20 September 2025 10:57:10 +0000 (0:00:00.274) 0:00:07.518 **** 2025-09-20 10:58:41.780916 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.780920 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.780924 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.780929 | orchestrator | 2025-09-20 10:58:41.780933 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.780937 | orchestrator | Saturday 20 September 
2025 10:57:10 +0000 (0:00:00.303) 0:00:07.822 **** 2025-09-20 10:58:41.780996 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.781002 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.781006 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.781011 | orchestrator | 2025-09-20 10:58:41.781015 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.781023 | orchestrator | Saturday 20 September 2025 10:57:10 +0000 (0:00:00.298) 0:00:08.120 **** 2025-09-20 10:58:41.781027 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781032 | orchestrator | 2025-09-20 10:58:41.781036 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.781041 | orchestrator | Saturday 20 September 2025 10:57:11 +0000 (0:00:00.132) 0:00:08.252 **** 2025-09-20 10:58:41.781045 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781050 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781054 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781058 | orchestrator | 2025-09-20 10:58:41.781063 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.781067 | orchestrator | Saturday 20 September 2025 10:57:11 +0000 (0:00:00.262) 0:00:08.514 **** 2025-09-20 10:58:41.781072 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.781076 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.781081 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.781085 | orchestrator | 2025-09-20 10:58:41.781093 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.781101 | orchestrator | Saturday 20 September 2025 10:57:11 +0000 (0:00:00.401) 0:00:08.916 **** 2025-09-20 10:58:41.781106 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781110 | orchestrator | 2025-09-20 10:58:41.781114 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.781119 | orchestrator | Saturday 20 September 2025 10:57:11 +0000 (0:00:00.119) 0:00:09.036 **** 2025-09-20 10:58:41.781123 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781128 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781132 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781136 | orchestrator | 2025-09-20 10:58:41.781141 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.781145 | orchestrator | Saturday 20 September 2025 10:57:12 +0000 (0:00:00.277) 0:00:09.313 **** 2025-09-20 10:58:41.781149 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.781154 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.781158 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.781162 | orchestrator | 2025-09-20 10:58:41.781167 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.781171 | orchestrator | Saturday 20 September 2025 10:57:12 +0000 (0:00:00.352) 0:00:09.666 **** 2025-09-20 10:58:41.781175 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781180 | orchestrator | 2025-09-20 10:58:41.781184 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.781189 | orchestrator | Saturday 20 September 2025 10:57:12 +0000 (0:00:00.102) 
0:00:09.769 **** 2025-09-20 10:58:41.781193 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781197 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781202 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781206 | orchestrator | 2025-09-20 10:58:41.781210 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.781215 | orchestrator | Saturday 20 September 2025 10:57:12 +0000 (0:00:00.238) 0:00:10.008 **** 2025-09-20 10:58:41.781219 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.781223 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.781228 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.781232 | orchestrator | 2025-09-20 10:58:41.781237 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.781241 | orchestrator | Saturday 20 September 2025 10:57:13 +0000 (0:00:00.410) 0:00:10.418 **** 2025-09-20 10:58:41.781245 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781249 | orchestrator | 2025-09-20 10:58:41.781254 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.781258 | orchestrator | Saturday 20 September 2025 10:57:13 +0000 (0:00:00.117) 0:00:10.536 **** 2025-09-20 10:58:41.781262 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781267 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781271 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781275 | orchestrator | 2025-09-20 10:58:41.781280 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 10:58:41.781284 | orchestrator | Saturday 20 September 2025 10:57:13 +0000 (0:00:00.267) 0:00:10.804 **** 2025-09-20 10:58:41.781288 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:58:41.781293 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:58:41.781297 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:58:41.781301 | orchestrator | 2025-09-20 10:58:41.781306 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 10:58:41.781310 | orchestrator | Saturday 20 September 2025 10:57:13 +0000 (0:00:00.305) 0:00:11.109 **** 2025-09-20 10:58:41.781314 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781319 | orchestrator | 2025-09-20 10:58:41.781323 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 10:58:41.781327 | orchestrator | Saturday 20 September 2025 10:57:14 +0000 (0:00:00.115) 0:00:11.225 **** 2025-09-20 10:58:41.781335 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781339 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781343 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781348 | orchestrator | 2025-09-20 10:58:41.781352 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-20 10:58:41.781356 | orchestrator | Saturday 20 September 2025 10:57:14 +0000 (0:00:00.399) 0:00:11.625 **** 2025-09-20 10:58:41.781361 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:58:41.781365 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:58:41.781369 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:58:41.781374 | orchestrator | 2025-09-20 10:58:41.781378 | orchestrator | TASK [horizon : Copying over horizon.conf] 
************************************* 2025-09-20 10:58:41.781382 | orchestrator | Saturday 20 September 2025 10:57:15 +0000 (0:00:01.458) 0:00:13.083 **** 2025-09-20 10:58:41.781387 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-20 10:58:41.781391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-20 10:58:41.781395 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-20 10:58:41.781400 | orchestrator | 2025-09-20 10:58:41.781404 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-20 10:58:41.781411 | orchestrator | Saturday 20 September 2025 10:57:17 +0000 (0:00:01.839) 0:00:14.922 **** 2025-09-20 10:58:41.781415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-20 10:58:41.781420 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-20 10:58:41.781424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-20 10:58:41.781429 | orchestrator | 2025-09-20 10:58:41.781433 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-20 10:58:41.781438 | orchestrator | Saturday 20 September 2025 10:57:19 +0000 (0:00:02.053) 0:00:16.976 **** 2025-09-20 10:58:41.781445 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-20 10:58:41.781450 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-20 10:58:41.781454 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-20 10:58:41.781458 | orchestrator | 2025-09-20 10:58:41.781463 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-20 10:58:41.781467 | orchestrator | Saturday 20 September 2025 10:57:21 +0000 (0:00:02.190) 0:00:19.167 **** 2025-09-20 10:58:41.781472 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781476 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781480 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781484 | orchestrator | 2025-09-20 10:58:41.781489 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-20 10:58:41.781493 | orchestrator | Saturday 20 September 2025 10:57:22 +0000 (0:00:00.325) 0:00:19.492 **** 2025-09-20 10:58:41.781498 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781502 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781506 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781511 | orchestrator | 2025-09-20 10:58:41.781515 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 10:58:41.781520 | orchestrator | Saturday 20 September 2025 10:57:22 +0000 (0:00:00.295) 0:00:19.787 **** 2025-09-20 10:58:41.781524 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:58:41.781528 | orchestrator | 2025-09-20 10:58:41.781533 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] 
******** 2025-09-20 10:58:41.781537 | orchestrator | Saturday 20 September 2025 10:57:23 +0000 (0:00:00.575) 0:00:20.363 **** 2025-09-20 10:58:41.781546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.781558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.781571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.781576 | orchestrator | 2025-09-20 10:58:41.781581 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-20 10:58:41.781585 | orchestrator | Saturday 20 September 2025 10:57:24 +0000 (0:00:01.727) 0:00:22.091 **** 2025-09-20 10:58:41.781594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:58:41.781603 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:58:41.781618 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:58:41.781634 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781639 | orchestrator | 2025-09-20 10:58:41.781644 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-20 10:58:41.781649 | orchestrator | Saturday 20 September 2025 10:57:25 +0000 (0:00:00.613) 0:00:22.704 **** 2025-09-20 10:58:41.781661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:58:41.781667 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:58:41.781684 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 10:58:41.781706 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781711 | orchestrator | 2025-09-20 10:58:41.781716 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-20 10:58:41.781721 | orchestrator | Saturday 20 September 2025 10:57:26 +0000 (0:00:00.880) 0:00:23.584 **** 2025-09-20 10:58:41.781726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.781739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.781748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 10:58:41.781754 | orchestrator | 2025-09-20 10:58:41.781759 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 10:58:41.781763 | orchestrator | Saturday 20 September 2025 10:57:28 +0000 (0:00:01.749) 0:00:25.334 **** 2025-09-20 10:58:41.781768 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:58:41.781776 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:58:41.781781 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:58:41.781786 | orchestrator | 2025-09-20 10:58:41.781790 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 10:58:41.781795 | 
orchestrator | Saturday 20 September 2025 10:57:28 +0000 (0:00:00.314) 0:00:25.649 ****
2025-09-20 10:58:41.781800 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:58:41.781805 | orchestrator |
2025-09-20 10:58:41.781810 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-20 10:58:41.781815 | orchestrator | Saturday 20 September 2025 10:57:28 +0000 (0:00:00.555) 0:00:26.204 ****
2025-09-20 10:58:41.781820 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:58:41.781824 | orchestrator |
2025-09-20 10:58:41.781832 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-20 10:58:41.781841 | orchestrator | Saturday 20 September 2025 10:57:30 +0000 (0:00:01.981) 0:00:28.185 ****
2025-09-20 10:58:41.781846 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:58:41.781851 | orchestrator |
2025-09-20 10:58:41.781856 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-20 10:58:41.781860 | orchestrator | Saturday 20 September 2025 10:57:33 +0000 (0:00:02.367) 0:00:30.553 ****
2025-09-20 10:58:41.781865 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:58:41.781870 | orchestrator |
2025-09-20 10:58:41.781875 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-20 10:58:41.781880 | orchestrator | Saturday 20 September 2025 10:57:46 +0000 (0:00:12.894) 0:00:43.448 ****
2025-09-20 10:58:41.781884 | orchestrator |
2025-09-20 10:58:41.781889 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-20 10:58:41.781894 | orchestrator | Saturday 20 September 2025 10:57:46 +0000 (0:00:00.066) 0:00:43.515 ****
2025-09-20 10:58:41.781899 | orchestrator |
2025-09-20 10:58:41.781903 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-20 10:58:41.781908 | orchestrator | Saturday 20 September 2025 10:57:46 +0000 (0:00:00.063) 0:00:43.578 ****
2025-09-20 10:58:41.781913 | orchestrator |
2025-09-20 10:58:41.781918 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-20 10:58:41.781923 | orchestrator | Saturday 20 September 2025 10:57:46 +0000 (0:00:00.069) 0:00:43.648 ****
2025-09-20 10:58:41.781927 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:58:41.781932 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:58:41.781937 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:58:41.781942 | orchestrator |
2025-09-20 10:58:41.781947 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:58:41.781952 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-20 10:58:41.781957 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-20 10:58:41.781961 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-20 10:58:41.781966 | orchestrator |
2025-09-20 10:58:41.781981 | orchestrator |
2025-09-20 10:58:41.781987 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:58:41.781992 | orchestrator | Saturday 20 September 2025 10:58:41 +0000 (0:00:54.652) 0:01:38.300 ****
2025-09-20 10:58:41.781996 | orchestrator | ===============================================================================
2025-09-20 10:58:41.782001 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.65s
2025-09-20 10:58:41.782006 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 12.89s
2025-09-20 10:58:41.782011 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.37s
2025-09-20 10:58:41.782064 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.19s
2025-09-20 10:58:41.782071 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.05s
2025-09-20 10:58:41.782075 | orchestrator | horizon : Creating Horizon database ------------------------------------- 1.98s
2025-09-20 10:58:41.782079 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s
2025-09-20 10:58:41.782086 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.75s
2025-09-20 10:58:41.782093 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.73s
2025-09-20 10:58:41.782100 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.46s
2025-09-20 10:58:41.782106 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.12s
2025-09-20 10:58:41.782113 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s
2025-09-20 10:58:41.782126 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2025-09-20 10:58:41.782134 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s
2025-09-20 10:58:41.782140 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2025-09-20 10:58:41.782145 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2025-09-20 10:58:41.782149 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2025-09-20 10:58:41.782153 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.46s
2025-09-20 10:58:41.782157 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s
2025-09-20 10:58:41.782165 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-09-20 10:58:41.782170 | orchestrator | 2025-09-20 10:58:41 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED
2025-09-20 10:58:41.782789 | orchestrator | 2025-09-20 10:58:41 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED
2025-09-20 10:58:41.782895 | orchestrator | 2025-09-20 10:58:41 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:58:44.829109 | orchestrator | 2025-09-20 10:58:44 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED
2025-09-20 10:58:44.830623 | orchestrator | 2025-09-20 10:58:44 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED
2025-09-20 10:58:44.831085 | orchestrator | 2025-09-20 10:58:44 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:58:47.873711 | orchestrator | 2025-09-20 10:58:47 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED
2025-09-20 10:58:47.874409 |
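For readability, the horizon item that kolla-ansible loops over in the tasks above can be unfolded from its single-line Python repr into YAML. The sketch below is only a condensed re-rendering of the testbed-node-0 values already visible in this log (ENABLE_* flags set to "no" and the ACME frontend rules are omitted for brevity); it is a reading aid, not a file taken from the repository.

horizon:
  container_name: horizon
  group: horizon
  enabled: true
  image: registry.osism.tech/kolla/horizon:2024.2
  environment:                  # only the dashboard panels enabled in this run
    ENABLE_DESIGNATE: "yes"
    ENABLE_MAGNUM: "yes"
    ENABLE_MANILA: "yes"
    ENABLE_OCTAVIA: "yes"
    FORCE_GENERATE: "no"
  volumes:
    - /etc/kolla/horizon/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
  healthcheck:
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"]
    interval: "30"
    retries: "3"
    start_period: "5"
    timeout: "30"
  haproxy:
    horizon:                     # internal VIP, HTTPS on 443 to plain-HTTP backends on 80
      mode: http
      port: "443"
      listen_port: "80"
      tls_backend: "no"
      backend_http_extra: ["balance roundrobin"]
    horizon_redirect:            # internal port 80 redirect (ACME challenges excepted)
      mode: redirect
      port: "80"
      listen_port: "80"
    horizon_external:            # same layout behind api.testbed.osism.xyz
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "443"
      listen_port: "80"
      tls_backend: "no"
    horizon_external_redirect:
      mode: redirect
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "80"
      listen_port: "80"
    acme_client:
      enabled: true
      with_frontend: false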
orchestrator | 2025-09-20 10:58:47 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:47.874503 | orchestrator | 2025-09-20 10:58:47 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:50.919856 | orchestrator | 2025-09-20 10:58:50 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:50.922184 | orchestrator | 2025-09-20 10:58:50 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:50.922578 | orchestrator | 2025-09-20 10:58:50 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:53.970085 | orchestrator | 2025-09-20 10:58:53 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:53.971618 | orchestrator | 2025-09-20 10:58:53 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:53.971644 | orchestrator | 2025-09-20 10:58:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:58:57.008435 | orchestrator | 2025-09-20 10:58:57 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:58:57.010803 | orchestrator | 2025-09-20 10:58:57 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:58:57.010840 | orchestrator | 2025-09-20 10:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:00.055912 | orchestrator | 2025-09-20 10:59:00 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:00.057009 | orchestrator | 2025-09-20 10:59:00 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:00.057042 | orchestrator | 2025-09-20 10:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:03.103585 | orchestrator | 2025-09-20 10:59:03 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:03.105524 | orchestrator | 2025-09-20 10:59:03 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:03.105584 | orchestrator | 2025-09-20 10:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:06.152146 | orchestrator | 2025-09-20 10:59:06 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:06.153906 | orchestrator | 2025-09-20 10:59:06 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:06.153952 | orchestrator | 2025-09-20 10:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:09.200150 | orchestrator | 2025-09-20 10:59:09 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:09.201719 | orchestrator | 2025-09-20 10:59:09 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:09.201752 | orchestrator | 2025-09-20 10:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:12.255120 | orchestrator | 2025-09-20 10:59:12 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:12.255781 | orchestrator | 2025-09-20 10:59:12 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:12.256224 | orchestrator | 2025-09-20 10:59:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:15.304564 | orchestrator | 2025-09-20 10:59:15 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:15.305663 | orchestrator | 2025-09-20 10:59:15 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state 
STARTED 2025-09-20 10:59:15.305699 | orchestrator | 2025-09-20 10:59:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:18.351223 | orchestrator | 2025-09-20 10:59:18 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:18.353870 | orchestrator | 2025-09-20 10:59:18 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:18.353885 | orchestrator | 2025-09-20 10:59:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:21.401247 | orchestrator | 2025-09-20 10:59:21 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:21.401884 | orchestrator | 2025-09-20 10:59:21 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state STARTED 2025-09-20 10:59:21.401934 | orchestrator | 2025-09-20 10:59:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:24.454870 | orchestrator | 2025-09-20 10:59:24 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:24.457037 | orchestrator | 2025-09-20 10:59:24 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:24.460763 | orchestrator | 2025-09-20 10:59:24 | INFO  | Task 40635dba-2a6b-4a14-bfcd-1abd55464d85 is in state SUCCESS 2025-09-20 10:59:24.462380 | orchestrator | 2025-09-20 10:59:24 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:24.464023 | orchestrator | 2025-09-20 10:59:24 | INFO  | Task 1bdb2d0f-012a-43bd-88c6-9f6b5e7fbe00 is in state STARTED 2025-09-20 10:59:24.464283 | orchestrator | 2025-09-20 10:59:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:27.514475 | orchestrator | 2025-09-20 10:59:27 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:27.516807 | orchestrator | 2025-09-20 10:59:27 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:27.516846 | orchestrator | 2025-09-20 10:59:27 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:27.516854 | orchestrator | 2025-09-20 10:59:27 | INFO  | Task 1bdb2d0f-012a-43bd-88c6-9f6b5e7fbe00 is in state SUCCESS 2025-09-20 10:59:27.516888 | orchestrator | 2025-09-20 10:59:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:30.602427 | orchestrator | 2025-09-20 10:59:30 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:30.602562 | orchestrator | 2025-09-20 10:59:30 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:30.602582 | orchestrator | 2025-09-20 10:59:30 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:30.602598 | orchestrator | 2025-09-20 10:59:30 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:30.602613 | orchestrator | 2025-09-20 10:59:30 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:30.602629 | orchestrator | 2025-09-20 10:59:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:33.600033 | orchestrator | 2025-09-20 10:59:33 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:33.601296 | orchestrator | 2025-09-20 10:59:33 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:33.602250 | orchestrator | 2025-09-20 10:59:33 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 
10:59:33.603227 | orchestrator | 2025-09-20 10:59:33 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:33.604188 | orchestrator | 2025-09-20 10:59:33 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:33.604209 | orchestrator | 2025-09-20 10:59:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:36.639073 | orchestrator | 2025-09-20 10:59:36 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:36.639151 | orchestrator | 2025-09-20 10:59:36 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:36.639163 | orchestrator | 2025-09-20 10:59:36 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:36.639861 | orchestrator | 2025-09-20 10:59:36 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:36.640930 | orchestrator | 2025-09-20 10:59:36 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:36.640946 | orchestrator | 2025-09-20 10:59:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:39.682909 | orchestrator | 2025-09-20 10:59:39 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:39.683290 | orchestrator | 2025-09-20 10:59:39 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state STARTED 2025-09-20 10:59:39.685270 | orchestrator | 2025-09-20 10:59:39 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:39.686899 | orchestrator | 2025-09-20 10:59:39 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:39.689461 | orchestrator | 2025-09-20 10:59:39 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:39.689504 | orchestrator | 2025-09-20 10:59:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:42.721532 | orchestrator | 2025-09-20 10:59:42 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:42.722114 | orchestrator | 2025-09-20 10:59:42 | INFO  | Task 664df228-e79f-43f4-8c6b-c2871a495248 is in state SUCCESS 2025-09-20 10:59:42.723656 | orchestrator | 2025-09-20 10:59:42.723682 | orchestrator | 2025-09-20 10:59:42.723720 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-20 10:59:42.723734 | orchestrator | 2025-09-20 10:59:42.723747 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-20 10:59:42.723758 | orchestrator | Saturday 20 September 2025 10:58:31 +0000 (0:00:00.243) 0:00:00.243 **** 2025-09-20 10:59:42.723774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-20 10:59:42.723786 | orchestrator | 2025-09-20 10:59:42.723798 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-20 10:59:42.723815 | orchestrator | Saturday 20 September 2025 10:58:31 +0000 (0:00:00.227) 0:00:00.471 **** 2025-09-20 10:59:42.723827 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-20 10:59:42.723838 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-20 10:59:42.723849 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-20 10:59:42.723860 | orchestrator | 2025-09-20 
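The cephclient play that starts here runs the Ceph client as a small Docker Compose service on the manager: it creates /opt/cephclient/configuration and /opt/cephclient/data, drops ceph.conf and the keyring into the configuration directory, copies a docker-compose.yml (see the following tasks), and installs wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) that presumably invoke the tools inside that container. The compose file below is a hypothetical sketch of that shape only; the image reference and every setting in it are assumptions, not values from this log or from the osism.services.cephclient role.

# hypothetical sketch of /opt/cephclient/docker-compose.yml (shape only)
services:
  cephclient:
    image: "cephclient-image:tag"                    # placeholder; the real image is set by the role
    restart: unless-stopped
    volumes:
      - /opt/cephclient/configuration:/etc/ceph:ro   # ceph.conf and keyring copied by the role
      - /opt/cephclient/data:/data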
10:59:42.723871 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-20 10:59:42.723882 | orchestrator | Saturday 20 September 2025 10:58:33 +0000 (0:00:01.217) 0:00:01.688 **** 2025-09-20 10:59:42.723893 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-20 10:59:42.723910 | orchestrator | 2025-09-20 10:59:42.723921 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-20 10:59:42.723932 | orchestrator | Saturday 20 September 2025 10:58:34 +0000 (0:00:01.207) 0:00:02.896 **** 2025-09-20 10:59:42.723981 | orchestrator | changed: [testbed-manager] 2025-09-20 10:59:42.723995 | orchestrator | 2025-09-20 10:59:42.724006 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-20 10:59:42.724017 | orchestrator | Saturday 20 September 2025 10:58:35 +0000 (0:00:00.947) 0:00:03.844 **** 2025-09-20 10:59:42.724028 | orchestrator | changed: [testbed-manager] 2025-09-20 10:59:42.724044 | orchestrator | 2025-09-20 10:59:42.724055 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-20 10:59:42.724066 | orchestrator | Saturday 20 September 2025 10:58:35 +0000 (0:00:00.827) 0:00:04.671 **** 2025-09-20 10:59:42.724077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-20 10:59:42.724088 | orchestrator | ok: [testbed-manager] 2025-09-20 10:59:42.724099 | orchestrator | 2025-09-20 10:59:42.724125 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-20 10:59:42.724136 | orchestrator | Saturday 20 September 2025 10:59:11 +0000 (0:00:35.254) 0:00:39.926 **** 2025-09-20 10:59:42.724146 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-20 10:59:42.724158 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-20 10:59:42.724182 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-20 10:59:42.724193 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-20 10:59:42.724204 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-20 10:59:42.724215 | orchestrator | 2025-09-20 10:59:42.724226 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-20 10:59:42.724237 | orchestrator | Saturday 20 September 2025 10:59:15 +0000 (0:00:03.987) 0:00:43.913 **** 2025-09-20 10:59:42.724248 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-20 10:59:42.724258 | orchestrator | 2025-09-20 10:59:42.724275 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-20 10:59:42.724286 | orchestrator | Saturday 20 September 2025 10:59:15 +0000 (0:00:00.490) 0:00:44.403 **** 2025-09-20 10:59:42.724297 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:59:42.724308 | orchestrator | 2025-09-20 10:59:42.724319 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-20 10:59:42.724330 | orchestrator | Saturday 20 September 2025 10:59:15 +0000 (0:00:00.133) 0:00:44.537 **** 2025-09-20 10:59:42.724349 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:59:42.724361 | orchestrator | 2025-09-20 10:59:42.724372 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient 
service] *******
2025-09-20 10:59:42.724382 | orchestrator | Saturday 20 September 2025 10:59:16 +0000 (0:00:00.320) 0:00:44.857 ****
2025-09-20 10:59:42.724393 | orchestrator | changed: [testbed-manager]
2025-09-20 10:59:42.724404 | orchestrator |
2025-09-20 10:59:42.724415 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-20 10:59:42.724426 | orchestrator | Saturday 20 September 2025 10:59:18 +0000 (0:00:02.009) 0:00:46.867 ****
2025-09-20 10:59:42.724437 | orchestrator | changed: [testbed-manager]
2025-09-20 10:59:42.724447 | orchestrator |
2025-09-20 10:59:42.724470 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-20 10:59:42.724482 | orchestrator | Saturday 20 September 2025 10:59:18 +0000 (0:00:00.794) 0:00:47.662 ****
2025-09-20 10:59:42.724492 | orchestrator | changed: [testbed-manager]
2025-09-20 10:59:42.724503 | orchestrator |
2025-09-20 10:59:42.724514 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-20 10:59:42.724532 | orchestrator | Saturday 20 September 2025 10:59:19 +0000 (0:00:00.654) 0:00:48.317 ****
2025-09-20 10:59:42.724543 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-20 10:59:42.724554 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-20 10:59:42.724565 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-20 10:59:42.724576 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-20 10:59:42.724586 | orchestrator |
2025-09-20 10:59:42.724597 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:59:42.724609 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-20 10:59:42.724621 | orchestrator |
2025-09-20 10:59:42.724632 | orchestrator |
2025-09-20 10:59:42.724697 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:59:42.724711 | orchestrator | Saturday 20 September 2025 10:59:21 +0000 (0:00:01.468) 0:00:49.785 ****
2025-09-20 10:59:42.724722 | orchestrator | ===============================================================================
2025-09-20 10:59:42.724733 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.25s
2025-09-20 10:59:42.724744 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.99s
2025-09-20 10:59:42.724754 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.01s
2025-09-20 10:59:42.724765 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s
2025-09-20 10:59:42.724776 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s
2025-09-20 10:59:42.724787 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.21s
2025-09-20 10:59:42.724798 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s
2025-09-20 10:59:42.724809 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.83s
2025-09-20 10:59:42.724819 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s
2025-09-20 10:59:42.724830 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2025-09-20 10:59:42.724841 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s
2025-09-20 10:59:42.724851 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2025-09-20 10:59:42.724862 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-09-20 10:59:42.724873 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-09-20 10:59:42.724884 | orchestrator |
2025-09-20 10:59:42.724894 | orchestrator |
2025-09-20 10:59:42.724905 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 10:59:42.724916 | orchestrator |
2025-09-20 10:59:42.724927 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 10:59:42.724962 | orchestrator | Saturday 20 September 2025 10:59:25 +0000 (0:00:00.186) 0:00:00.186 ****
2025-09-20 10:59:42.724975 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:59:42.724986 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:59:42.724997 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:59:42.725008 | orchestrator |
2025-09-20 10:59:42.725019 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 10:59:42.725030 | orchestrator | Saturday 20 September 2025 10:59:25 +0000 (0:00:00.325) 0:00:00.512 ****
2025-09-20 10:59:42.725040 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-20 10:59:42.725051 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-20 10:59:42.725062 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-20 10:59:42.725073 | orchestrator |
2025-09-20 10:59:42.725084 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-20 10:59:42.725095 | orchestrator |
2025-09-20 10:59:42.725106 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-20 10:59:42.725116 | orchestrator | Saturday 20 September 2025 10:59:26 +0000 (0:00:00.733) 0:00:01.245 ****
2025-09-20 10:59:42.725127 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:59:42.725138 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:59:42.725156 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:59:42.725167 | orchestrator |
2025-09-20 10:59:42.725178 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:59:42.725190 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:59:42.725201 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:59:42.725212 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:59:42.725223 | orchestrator |
2025-09-20 10:59:42.725234 | orchestrator |
2025-09-20 10:59:42.725245 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:59:42.725256 | orchestrator | Saturday 20 September 2025 10:59:27 +0000 (0:00:00.659) 0:00:01.904 ****
2025-09-20 10:59:42.725267 | orchestrator | ===============================================================================
2025-09-20 10:59:42.725278 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2025-09-20
10:59:42.725288 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.66s 2025-09-20 10:59:42.725304 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-09-20 10:59:42.725316 | orchestrator | 2025-09-20 10:59:42.725326 | orchestrator | 2025-09-20 10:59:42.725337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:59:42.725348 | orchestrator | 2025-09-20 10:59:42.725359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:59:42.725370 | orchestrator | Saturday 20 September 2025 10:57:03 +0000 (0:00:00.284) 0:00:00.284 **** 2025-09-20 10:59:42.725381 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:59:42.725392 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:59:42.725403 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:59:42.725414 | orchestrator | 2025-09-20 10:59:42.725425 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:59:42.725436 | orchestrator | Saturday 20 September 2025 10:57:03 +0000 (0:00:00.301) 0:00:00.585 **** 2025-09-20 10:59:42.725447 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-20 10:59:42.725458 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-20 10:59:42.725469 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-20 10:59:42.725480 | orchestrator | 2025-09-20 10:59:42.725490 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-20 10:59:42.725507 | orchestrator | 2025-09-20 10:59:42.725558 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 10:59:42.725572 | orchestrator | Saturday 20 September 2025 10:57:03 +0000 (0:00:00.460) 0:00:01.046 **** 2025-09-20 10:59:42.725582 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:59:42.725594 | orchestrator | 2025-09-20 10:59:42.725605 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-20 10:59:42.725616 | orchestrator | Saturday 20 September 2025 10:57:04 +0000 (0:00:00.553) 0:00:01.600 **** 2025-09-20 10:59:42.725631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.725648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.725666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.725679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.725732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.725746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.725758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.725770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.725781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.725793 | orchestrator | 2025-09-20 10:59:42.725804 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-20 10:59:42.725815 | orchestrator | Saturday 20 September 2025 10:57:06 +0000 (0:00:01.725) 0:00:03.325 **** 2025-09-20 10:59:42.725827 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-20 10:59:42.725837 | orchestrator | 2025-09-20 10:59:42.725849 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-20 10:59:42.725860 | orchestrator | Saturday 20 September 2025 10:57:06 +0000 (0:00:00.836) 0:00:04.161 **** 2025-09-20 10:59:42.725877 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:59:42.725888 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:59:42.725898 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:59:42.725909 | orchestrator | 2025-09-20 10:59:42.725920 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-20 10:59:42.725931 | orchestrator | Saturday 20 September 2025 10:57:07 +0000 (0:00:00.505) 0:00:04.667 **** 2025-09-20 
10:59:42.725942 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 10:59:42.725969 | orchestrator | 2025-09-20 10:59:42.725980 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 10:59:42.725991 | orchestrator | Saturday 20 September 2025 10:57:08 +0000 (0:00:00.683) 0:00:05.351 **** 2025-09-20 10:59:42.726003 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:59:42.726057 | orchestrator | 2025-09-20 10:59:42.726078 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-20 10:59:42.726090 | orchestrator | Saturday 20 September 2025 10:57:08 +0000 (0:00:00.547) 0:00:05.898 **** 2025-09-20 10:59:42.726102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.726115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.726236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.726275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726363 | orchestrator | 2025-09-20 10:59:42.726375 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-20 10:59:42.726386 | orchestrator | Saturday 20 September 2025 10:57:11 +0000 (0:00:02.953) 0:00:08.852 **** 2025-09-20 10:59:42.726402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:59:42.726422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.726434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:59:42.726445 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.726457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:59:42.726470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.726491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:59:42.726503 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.726521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:59:42.726534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.726546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:59:42.726557 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.726568 | orchestrator | 2025-09-20 10:59:42.726579 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-20 10:59:42.726590 | orchestrator | Saturday 20 September 2025 10:57:12 +0000 (0:00:00.661) 0:00:09.513 **** 2025-09-20 10:59:42.726601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:59:42.726623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.726635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:59:42.726646 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.726665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:59:42.726677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.726688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:59:42.726706 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.726718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 10:59:42.726735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.726753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-20 10:59:42.726765 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.726776 | orchestrator | 2025-09-20 10:59:42.726787 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-20 10:59:42.726798 | orchestrator | Saturday 20 September 2025 10:57:12 +0000 (0:00:00.690) 0:00:10.204 **** 2025-09-20 10:59:42.726810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.726822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.726845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.726863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.726937 | orchestrator | 2025-09-20 10:59:42.726966 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-20 10:59:42.726986 | orchestrator | Saturday 20 September 2025 10:57:16 +0000 (0:00:03.109) 0:00:13.313 **** 2025-09-20 10:59:42.727004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.727017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.727029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.727046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.727062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.727074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.727092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.727104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.727115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.727132 | orchestrator | 2025-09-20 10:59:42.727143 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-20 10:59:42.727154 | orchestrator | Saturday 20 September 2025 10:57:21 +0000 (0:00:05.162) 0:00:18.476 **** 2025-09-20 10:59:42.727165 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.727176 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:59:42.727187 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:59:42.727198 | orchestrator | 2025-09-20 10:59:42.727209 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-20 10:59:42.727220 | orchestrator | Saturday 20 September 2025 10:57:22 +0000 (0:00:01.448) 0:00:19.924 **** 2025-09-20 10:59:42.727231 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.727241 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.727252 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.727263 | orchestrator | 2025-09-20 10:59:42.727274 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-20 10:59:42.727285 | orchestrator | Saturday 20 September 2025 10:57:23 +0000 (0:00:00.513) 0:00:20.437 **** 2025-09-20 10:59:42.727296 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.727306 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.727317 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.727328 | orchestrator | 2025-09-20 
10:59:42.727339 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-20 10:59:42.727350 | orchestrator | Saturday 20 September 2025 10:57:23 +0000 (0:00:00.301) 0:00:20.739 **** 2025-09-20 10:59:42.727360 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.727371 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.727382 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.727393 | orchestrator | 2025-09-20 10:59:42.727403 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-20 10:59:42.727414 | orchestrator | Saturday 20 September 2025 10:57:24 +0000 (0:00:00.556) 0:00:21.296 **** 2025-09-20 10:59:42.727430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.727449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.727467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.727479 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.727491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.727507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 10:59:42.727526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.727544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 
10:59:42.727555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.727567 | orchestrator | 2025-09-20 10:59:42.727578 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 10:59:42.727589 | orchestrator | Saturday 20 September 2025 10:57:26 +0000 (0:00:02.314) 0:00:23.611 **** 2025-09-20 10:59:42.727600 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.727611 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.727621 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.727632 | orchestrator | 2025-09-20 10:59:42.727643 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-20 10:59:42.727654 | orchestrator | Saturday 20 September 2025 10:57:26 +0000 (0:00:00.337) 0:00:23.949 **** 2025-09-20 10:59:42.727665 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-20 10:59:42.727676 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-20 10:59:42.727687 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-20 10:59:42.727697 | orchestrator | 2025-09-20 10:59:42.727708 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-20 10:59:42.727719 | orchestrator | Saturday 20 September 2025 10:57:28 +0000 (0:00:01.694) 0:00:25.643 **** 2025-09-20 10:59:42.727730 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 10:59:42.727741 | orchestrator | 2025-09-20 10:59:42.727752 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-20 10:59:42.727762 | orchestrator | Saturday 20 September 2025 10:57:29 +0000 (0:00:00.906) 0:00:26.550 **** 2025-09-20 10:59:42.727773 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.727784 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.727795 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.727806 | orchestrator | 2025-09-20 10:59:42.727817 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-20 10:59:42.727828 | orchestrator | Saturday 20 September 2025 10:57:30 +0000 (0:00:00.795) 0:00:27.345 **** 2025-09-20 10:59:42.727838 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-20 10:59:42.727849 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 10:59:42.727860 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-20 10:59:42.727871 | orchestrator | 2025-09-20 10:59:42.727881 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-20 10:59:42.727897 | orchestrator | Saturday 20 September 2025 10:57:31 +0000 (0:00:00.936) 0:00:28.282 **** 2025-09-20 10:59:42.727908 | orchestrator | ok: 
[testbed-node-0] 2025-09-20 10:59:42.727925 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:59:42.727936 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:59:42.727971 | orchestrator | 2025-09-20 10:59:42.727984 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-20 10:59:42.727995 | orchestrator | Saturday 20 September 2025 10:57:31 +0000 (0:00:00.272) 0:00:28.554 **** 2025-09-20 10:59:42.728005 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-20 10:59:42.728016 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-20 10:59:42.728027 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-20 10:59:42.728038 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-20 10:59:42.728049 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-20 10:59:42.728066 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-20 10:59:42.728078 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-20 10:59:42.728088 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-20 10:59:42.728099 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-20 10:59:42.728110 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-20 10:59:42.728121 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-20 10:59:42.728132 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-20 10:59:42.728142 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-20 10:59:42.728153 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-20 10:59:42.728164 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-20 10:59:42.728175 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 10:59:42.728186 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 10:59:42.728196 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 10:59:42.728207 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 10:59:42.728223 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 10:59:42.728241 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 10:59:42.728258 | orchestrator | 2025-09-20 10:59:42.728278 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-20 10:59:42.728297 | orchestrator | Saturday 20 September 2025 10:57:39 +0000 (0:00:08.348) 0:00:36.903 **** 2025-09-20 10:59:42.728315 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 10:59:42.728332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 10:59:42.728350 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 10:59:42.728366 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 10:59:42.728383 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 10:59:42.728401 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 10:59:42.728419 | orchestrator | 2025-09-20 10:59:42.728449 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-20 10:59:42.728467 | orchestrator | Saturday 20 September 2025 10:57:42 +0000 (0:00:02.687) 0:00:39.590 **** 2025-09-20 10:59:42.728497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.728532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.728550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 10:59:42.728562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.728574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.728593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 10:59:42.728609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.728628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.728639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 10:59:42.728650 | orchestrator | 2025-09-20 10:59:42.728662 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 10:59:42.728672 | orchestrator | Saturday 20 September 2025 10:57:44 +0000 (0:00:02.222) 0:00:41.813 **** 2025-09-20 10:59:42.728683 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.728694 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.728705 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.728715 | orchestrator | 2025-09-20 10:59:42.728726 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-20 10:59:42.728737 | orchestrator | Saturday 20 September 2025 10:57:44 +0000 (0:00:00.317) 0:00:42.131 **** 2025-09-20 10:59:42.728748 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.728759 | orchestrator | 2025-09-20 10:59:42.728769 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-20 10:59:42.728780 | orchestrator | Saturday 20 September 2025 10:57:46 +0000 (0:00:02.007) 0:00:44.138 **** 2025-09-20 10:59:42.728797 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.728808 | orchestrator | 2025-09-20 10:59:42.728819 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-20 10:59:42.728830 | orchestrator | Saturday 20 September 2025 10:57:48 +0000 (0:00:01.982) 0:00:46.120 **** 2025-09-20 10:59:42.728841 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:59:42.728852 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:59:42.728862 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:59:42.728873 | orchestrator | 2025-09-20 10:59:42.728884 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-20 10:59:42.728894 | orchestrator | Saturday 20 September 2025 10:57:49 +0000 (0:00:00.842) 0:00:46.963 **** 2025-09-20 10:59:42.728905 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:59:42.728915 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:59:42.728926 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:59:42.728937 | orchestrator | 2025-09-20 10:59:42.728966 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-20 10:59:42.728979 | orchestrator | Saturday 20 September 2025 10:57:50 +0000 (0:00:00.580) 0:00:47.543 **** 2025-09-20 10:59:42.728989 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.729000 | orchestrator | skipping: [testbed-node-1] 2025-09-20 
10:59:42.729011 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.729022 | orchestrator | 2025-09-20 10:59:42.729032 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-20 10:59:42.729043 | orchestrator | Saturday 20 September 2025 10:57:50 +0000 (0:00:00.357) 0:00:47.901 **** 2025-09-20 10:59:42.729054 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.729065 | orchestrator | 2025-09-20 10:59:42.729076 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-20 10:59:42.729087 | orchestrator | Saturday 20 September 2025 10:58:03 +0000 (0:00:12.946) 0:01:00.848 **** 2025-09-20 10:59:42.729098 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.729108 | orchestrator | 2025-09-20 10:59:42.729119 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-20 10:59:42.729130 | orchestrator | Saturday 20 September 2025 10:58:11 +0000 (0:00:07.826) 0:01:08.674 **** 2025-09-20 10:59:42.729141 | orchestrator | 2025-09-20 10:59:42.729152 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-20 10:59:42.729162 | orchestrator | Saturday 20 September 2025 10:58:11 +0000 (0:00:00.062) 0:01:08.736 **** 2025-09-20 10:59:42.729173 | orchestrator | 2025-09-20 10:59:42.729184 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-20 10:59:42.729203 | orchestrator | Saturday 20 September 2025 10:58:11 +0000 (0:00:00.059) 0:01:08.796 **** 2025-09-20 10:59:42.729214 | orchestrator | 2025-09-20 10:59:42.729225 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-20 10:59:42.729236 | orchestrator | Saturday 20 September 2025 10:58:11 +0000 (0:00:00.061) 0:01:08.858 **** 2025-09-20 10:59:42.729246 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.729257 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:59:42.729268 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:59:42.729278 | orchestrator | 2025-09-20 10:59:42.729289 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-20 10:59:42.729300 | orchestrator | Saturday 20 September 2025 10:58:37 +0000 (0:00:25.672) 0:01:34.531 **** 2025-09-20 10:59:42.729311 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.729322 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:59:42.729332 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:59:42.729343 | orchestrator | 2025-09-20 10:59:42.729354 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-20 10:59:42.729365 | orchestrator | Saturday 20 September 2025 10:58:47 +0000 (0:00:09.784) 0:01:44.315 **** 2025-09-20 10:59:42.729376 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.729387 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:59:42.729404 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:59:42.729421 | orchestrator | 2025-09-20 10:59:42.729432 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 10:59:42.729443 | orchestrator | Saturday 20 September 2025 10:58:54 +0000 (0:00:07.100) 0:01:51.415 **** 2025-09-20 10:59:42.729454 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-20 10:59:42.729465 | orchestrator | 2025-09-20 10:59:42.729476 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-20 10:59:42.729487 | orchestrator | Saturday 20 September 2025 10:58:54 +0000 (0:00:00.715) 0:01:52.130 **** 2025-09-20 10:59:42.729497 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:59:42.729508 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:59:42.729519 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:59:42.729530 | orchestrator | 2025-09-20 10:59:42.729540 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-20 10:59:42.729551 | orchestrator | Saturday 20 September 2025 10:58:55 +0000 (0:00:00.732) 0:01:52.863 **** 2025-09-20 10:59:42.729562 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:59:42.729572 | orchestrator | 2025-09-20 10:59:42.729583 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-20 10:59:42.729594 | orchestrator | Saturday 20 September 2025 10:58:57 +0000 (0:00:01.732) 0:01:54.596 **** 2025-09-20 10:59:42.729605 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-20 10:59:42.729616 | orchestrator | 2025-09-20 10:59:42.729627 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-20 10:59:42.729637 | orchestrator | Saturday 20 September 2025 10:59:06 +0000 (0:00:08.746) 0:02:03.342 **** 2025-09-20 10:59:42.729648 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-20 10:59:42.729659 | orchestrator | 2025-09-20 10:59:42.729670 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-20 10:59:42.729680 | orchestrator | Saturday 20 September 2025 10:59:26 +0000 (0:00:20.042) 0:02:23.385 **** 2025-09-20 10:59:42.729691 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-20 10:59:42.729702 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-20 10:59:42.729713 | orchestrator | 2025-09-20 10:59:42.729724 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-20 10:59:42.729734 | orchestrator | Saturday 20 September 2025 10:59:37 +0000 (0:00:11.841) 0:02:35.226 **** 2025-09-20 10:59:42.729745 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.729756 | orchestrator | 2025-09-20 10:59:42.729767 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-20 10:59:42.729778 | orchestrator | Saturday 20 September 2025 10:59:38 +0000 (0:00:00.151) 0:02:35.378 **** 2025-09-20 10:59:42.729788 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.729799 | orchestrator | 2025-09-20 10:59:42.729810 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-20 10:59:42.729821 | orchestrator | Saturday 20 September 2025 10:59:38 +0000 (0:00:00.118) 0:02:35.496 **** 2025-09-20 10:59:42.729831 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.729842 | orchestrator | 2025-09-20 10:59:42.729853 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-20 10:59:42.729864 | orchestrator | Saturday 20 September 2025 10:59:38 +0000 (0:00:00.142) 0:02:35.639 **** 2025-09-20 10:59:42.729875 
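The fernet bootstrap and distribute_fernet.yml tasks above build the fernet key repository on one controller and copy it to the others via the keystone-ssh container; the point is that every node must hold the same rotating key set, otherwise a token issued on one node cannot be validated on another. A minimal sketch of that property, using Python's cryptography package purely as an illustration (keystone-manage does the real work):

    # Why the key repository has to be identical everywhere: tokens are encrypted
    # with the primary key and validated against the whole key set.
    from cryptography.fernet import Fernet, InvalidToken, MultiFernet

    key_repository = [Fernet.generate_key() for _ in range(3)]  # staged, old, primary

    def issue_token(keys, payload: bytes) -> bytes:
        return Fernet(keys[-1]).encrypt(payload)        # encrypt with the primary key

    def validate_token(keys, token: bytes) -> bytes:
        return MultiFernet([Fernet(k) for k in keys]).decrypt(token)

    token = issue_token(key_repository, b"project:admin user:admin")
    assert validate_token(key_repository, token) == b"project:admin user:admin"

    try:                                                # a node with other keys fails
        validate_token([Fernet.generate_key()], token)
    except InvalidToken:
        pass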
| orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.729885 | orchestrator | 2025-09-20 10:59:42.729896 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-20 10:59:42.729907 | orchestrator | Saturday 20 September 2025 10:59:38 +0000 (0:00:00.561) 0:02:36.200 **** 2025-09-20 10:59:42.729918 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:59:42.729929 | orchestrator | 2025-09-20 10:59:42.729939 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 10:59:42.729972 | orchestrator | Saturday 20 September 2025 10:59:41 +0000 (0:00:02.802) 0:02:39.003 **** 2025-09-20 10:59:42.729983 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:59:42.729994 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:59:42.730005 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:59:42.730057 | orchestrator | 2025-09-20 10:59:42.730070 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:59:42.730081 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-20 10:59:42.730098 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-20 10:59:42.730109 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-20 10:59:42.730120 | orchestrator | 2025-09-20 10:59:42.730131 | orchestrator | 2025-09-20 10:59:42.730143 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:59:42.730154 | orchestrator | Saturday 20 September 2025 10:59:42 +0000 (0:00:00.439) 0:02:39.442 **** 2025-09-20 10:59:42.730164 | orchestrator | =============================================================================== 2025-09-20 10:59:42.730175 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.67s 2025-09-20 10:59:42.730186 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.04s 2025-09-20 10:59:42.730197 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.95s 2025-09-20 10:59:42.730207 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 11.84s 2025-09-20 10:59:42.730218 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.78s 2025-09-20 10:59:42.730236 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.75s 2025-09-20 10:59:42.730247 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.35s 2025-09-20 10:59:42.730258 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 7.83s 2025-09-20 10:59:42.730268 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.10s 2025-09-20 10:59:42.730279 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.16s 2025-09-20 10:59:42.730290 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.11s 2025-09-20 10:59:42.730301 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.95s 2025-09-20 10:59:42.730312 | orchestrator | keystone : Creating default user role ----------------------------------- 2.80s 2025-09-20 
10:59:42.730323 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.69s 2025-09-20 10:59:42.730333 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.31s 2025-09-20 10:59:42.730344 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.22s 2025-09-20 10:59:42.730355 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.01s 2025-09-20 10:59:42.730366 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.98s 2025-09-20 10:59:42.730377 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.73s 2025-09-20 10:59:42.730387 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.73s 2025-09-20 10:59:42.730398 | orchestrator | 2025-09-20 10:59:42 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:42.730409 | orchestrator | 2025-09-20 10:59:42 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:42.730420 | orchestrator | 2025-09-20 10:59:42 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:42.730432 | orchestrator | 2025-09-20 10:59:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:45.754274 | orchestrator | 2025-09-20 10:59:45 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:45.757251 | orchestrator | 2025-09-20 10:59:45 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 10:59:45.759395 | orchestrator | 2025-09-20 10:59:45 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:45.760811 | orchestrator | 2025-09-20 10:59:45 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:45.762464 | orchestrator | 2025-09-20 10:59:45 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:45.762502 | orchestrator | 2025-09-20 10:59:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:48.784725 | orchestrator | 2025-09-20 10:59:48 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:48.785397 | orchestrator | 2025-09-20 10:59:48 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 10:59:48.785429 | orchestrator | 2025-09-20 10:59:48 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:48.785441 | orchestrator | 2025-09-20 10:59:48 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:48.786243 | orchestrator | 2025-09-20 10:59:48 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:48.786266 | orchestrator | 2025-09-20 10:59:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:51.877043 | orchestrator | 2025-09-20 10:59:51 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:51.877140 | orchestrator | 2025-09-20 10:59:51 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 10:59:51.877154 | orchestrator | 2025-09-20 10:59:51 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:51.877164 | orchestrator | 2025-09-20 10:59:51 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:51.877174 | orchestrator | 
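The service-ks-register tasks in the keystone play above (and again for ceph-rgw/swift further down) amount to creating a service entry plus internal and public endpoints in the Keystone catalog. A rough equivalent with openstacksdk is sketched below; the role itself uses Ansible's OpenStack modules, and the cloud profile name here is an assumption:

    # Sketch: register a service with internal and public endpoints, mirroring the
    # "Creating services" / "Creating endpoints" tasks above. Assumes a clouds.yaml
    # entry named "testbed" with admin credentials.
    import openstack

    conn = openstack.connect(cloud="testbed")

    service = conn.identity.create_service(name="keystone", type="identity")

    endpoints = {
        "internal": "https://api-int.testbed.osism.xyz:5000",
        "public": "https://api.testbed.osism.xyz:5000",
    }
    for interface, url in endpoints.items():
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",
        )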
2025-09-20 10:59:51 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:51.877183 | orchestrator | 2025-09-20 10:59:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:54.859062 | orchestrator | 2025-09-20 10:59:54 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:54.860086 | orchestrator | 2025-09-20 10:59:54 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 10:59:54.860714 | orchestrator | 2025-09-20 10:59:54 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:54.861335 | orchestrator | 2025-09-20 10:59:54 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:54.862603 | orchestrator | 2025-09-20 10:59:54 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:54.862625 | orchestrator | 2025-09-20 10:59:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:59:57.892831 | orchestrator | 2025-09-20 10:59:57 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 10:59:57.892933 | orchestrator | 2025-09-20 10:59:57 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 10:59:57.893340 | orchestrator | 2025-09-20 10:59:57 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 10:59:57.893979 | orchestrator | 2025-09-20 10:59:57 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 10:59:57.894625 | orchestrator | 2025-09-20 10:59:57 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 10:59:57.894646 | orchestrator | 2025-09-20 10:59:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:00.915772 | orchestrator | 2025-09-20 11:00:00 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state STARTED 2025-09-20 11:00:00.917755 | orchestrator | 2025-09-20 11:00:00 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:00.919096 | orchestrator | 2025-09-20 11:00:00 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:00.919588 | orchestrator | 2025-09-20 11:00:00 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 11:00:00.920194 | orchestrator | 2025-09-20 11:00:00 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:00.920407 | orchestrator | 2025-09-20 11:00:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:04.225055 | orchestrator | 2025-09-20 11:00:04 | INFO  | Task 775d8439-aa0a-4d3a-89b6-91917e51f719 is in state SUCCESS 2025-09-20 11:00:04.225141 | orchestrator | 2025-09-20 11:00:04 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:04.225156 | orchestrator | 2025-09-20 11:00:04 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:04.225481 | orchestrator | 2025-09-20 11:00:04 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 11:00:04.226258 | orchestrator | 2025-09-20 11:00:04 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:04.226330 | orchestrator | 2025-09-20 11:00:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:07.248944 | orchestrator | 2025-09-20 11:00:07 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:07.249104 | orchestrator | 
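The repeated "Task … is in state STARTED" lines around this point are the deployment tooling polling its queued Ansible runs until each reaches a terminal state. Conceptually the loop looks like the sketch below; get_task_state is a stand-in for the real lookup and this is not the actual client code:

    # Sketch of the polling pattern visible in the log: report each task's state
    # and sleep until every task has finished.
    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)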
2025-09-20 11:00:07 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:07.249129 | orchestrator | [... status polls repeated every ~3 s from 11:00:07 to 11:00:31; Tasks 7c8ed203-dbb1-4350-9189-af0b35b98c8d, 75815cca-51fc-495d-9211-fdd0b490cf34, 5b3b37e8-5494-45bb-aada-5f9c9716fefc, 4f4be993-5dc4-4778-831e-3ff97f61ff53 and 244d30fd-bf25-4f47-84b1-94f552fa7f20 all remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...] 2025-09-20 11:00:34.548769 | orchestrator | 2025-09-20 11:00:34 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20
11:00:34.548981 | orchestrator | 2025-09-20 11:00:34 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:34.549778 | orchestrator | 2025-09-20 11:00:34 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:34.550308 | orchestrator | 2025-09-20 11:00:34 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 11:00:34.551059 | orchestrator | 2025-09-20 11:00:34 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:34.551101 | orchestrator | 2025-09-20 11:00:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:37.581759 | orchestrator | 2025-09-20 11:00:37 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:37.582959 | orchestrator | 2025-09-20 11:00:37 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:37.584169 | orchestrator | 2025-09-20 11:00:37 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:37.586702 | orchestrator | 2025-09-20 11:00:37 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state STARTED 2025-09-20 11:00:37.587693 | orchestrator | 2025-09-20 11:00:37 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:37.587725 | orchestrator | 2025-09-20 11:00:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:40.619044 | orchestrator | 2025-09-20 11:00:40 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:40.620108 | orchestrator | 2025-09-20 11:00:40 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:40.620845 | orchestrator | 2025-09-20 11:00:40 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:40.621310 | orchestrator | 2025-09-20 11:00:40 | INFO  | Task 4f4be993-5dc4-4778-831e-3ff97f61ff53 is in state SUCCESS 2025-09-20 11:00:40.621789 | orchestrator | 2025-09-20 11:00:40.621816 | orchestrator | 2025-09-20 11:00:40.621851 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:00:40.621863 | orchestrator | 2025-09-20 11:00:40.621875 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:00:40.621886 | orchestrator | Saturday 20 September 2025 10:59:32 +0000 (0:00:00.311) 0:00:00.311 **** 2025-09-20 11:00:40.621897 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:00:40.621909 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:00:40.621920 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:00:40.621931 | orchestrator | ok: [testbed-manager] 2025-09-20 11:00:40.621942 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:00:40.621953 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:00:40.621964 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:00:40.621976 | orchestrator | 2025-09-20 11:00:40.621987 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:00:40.621998 | orchestrator | Saturday 20 September 2025 10:59:33 +0000 (0:00:00.988) 0:00:01.300 **** 2025-09-20 11:00:40.622009 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622093 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622106 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622118 | orchestrator | 
ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622169 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622182 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622193 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-20 11:00:40.622204 | orchestrator | 2025-09-20 11:00:40.622352 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-20 11:00:40.622370 | orchestrator | 2025-09-20 11:00:40.622381 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-20 11:00:40.622392 | orchestrator | Saturday 20 September 2025 10:59:35 +0000 (0:00:01.389) 0:00:02.690 **** 2025-09-20 11:00:40.622404 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:00:40.622416 | orchestrator | 2025-09-20 11:00:40.622427 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-20 11:00:40.622438 | orchestrator | Saturday 20 September 2025 10:59:37 +0000 (0:00:02.065) 0:00:04.756 **** 2025-09-20 11:00:40.622449 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-20 11:00:40.622459 | orchestrator | 2025-09-20 11:00:40.622470 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-20 11:00:40.622481 | orchestrator | Saturday 20 September 2025 10:59:40 +0000 (0:00:03.303) 0:00:08.060 **** 2025-09-20 11:00:40.622493 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-20 11:00:40.622505 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-20 11:00:40.622516 | orchestrator | 2025-09-20 11:00:40.622527 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-20 11:00:40.622538 | orchestrator | Saturday 20 September 2025 10:59:46 +0000 (0:00:05.352) 0:00:13.412 **** 2025-09-20 11:00:40.622549 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:00:40.622560 | orchestrator | 2025-09-20 11:00:40.622571 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-20 11:00:40.622582 | orchestrator | Saturday 20 September 2025 10:59:48 +0000 (0:00:02.671) 0:00:16.083 **** 2025-09-20 11:00:40.622593 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:00:40.622604 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-20 11:00:40.622615 | orchestrator | 2025-09-20 11:00:40.622626 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-20 11:00:40.622647 | orchestrator | Saturday 20 September 2025 10:59:52 +0000 (0:00:03.450) 0:00:19.534 **** 2025-09-20 11:00:40.622658 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:00:40.622669 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-20 11:00:40.622680 | orchestrator | 2025-09-20 11:00:40.622691 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-20 11:00:40.622702 | orchestrator | Saturday 20 
September 2025 10:59:58 +0000 (0:00:05.915) 0:00:25.450 **** 2025-09-20 11:00:40.622713 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-20 11:00:40.622723 | orchestrator | 2025-09-20 11:00:40.622734 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:00:40.622745 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622756 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622768 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622778 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622789 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622814 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622826 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.622837 | orchestrator | 2025-09-20 11:00:40.622848 | orchestrator | 2025-09-20 11:00:40.622858 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:00:40.622869 | orchestrator | Saturday 20 September 2025 11:00:03 +0000 (0:00:05.472) 0:00:30.922 **** 2025-09-20 11:00:40.622880 | orchestrator | =============================================================================== 2025-09-20 11:00:40.622891 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.92s 2025-09-20 11:00:40.622902 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.47s 2025-09-20 11:00:40.622913 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.35s 2025-09-20 11:00:40.622923 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.45s 2025-09-20 11:00:40.622934 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.30s 2025-09-20 11:00:40.622945 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.67s 2025-09-20 11:00:40.622962 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.07s 2025-09-20 11:00:40.622973 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.39s 2025-09-20 11:00:40.622984 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2025-09-20 11:00:40.622995 | orchestrator | 2025-09-20 11:00:40.623005 | orchestrator | 2025-09-20 11:00:40.623016 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-20 11:00:40.623027 | orchestrator | 2025-09-20 11:00:40.623038 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-20 11:00:40.623049 | orchestrator | Saturday 20 September 2025 10:59:25 +0000 (0:00:00.313) 0:00:00.313 **** 2025-09-20 11:00:40.623060 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623091 | orchestrator | 2025-09-20 11:00:40.623103 | orchestrator | TASK [Set mgr/dashboard/ssl to false] 
****************************************** 2025-09-20 11:00:40.623114 | orchestrator | Saturday 20 September 2025 10:59:27 +0000 (0:00:01.989) 0:00:02.303 **** 2025-09-20 11:00:40.623133 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623145 | orchestrator | 2025-09-20 11:00:40.623156 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-20 11:00:40.623168 | orchestrator | Saturday 20 September 2025 10:59:28 +0000 (0:00:00.983) 0:00:03.286 **** 2025-09-20 11:00:40.623179 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623191 | orchestrator | 2025-09-20 11:00:40.623203 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-20 11:00:40.623214 | orchestrator | Saturday 20 September 2025 10:59:29 +0000 (0:00:01.247) 0:00:04.534 **** 2025-09-20 11:00:40.623226 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623238 | orchestrator | 2025-09-20 11:00:40.623249 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-20 11:00:40.623261 | orchestrator | Saturday 20 September 2025 10:59:31 +0000 (0:00:01.881) 0:00:06.416 **** 2025-09-20 11:00:40.623273 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623284 | orchestrator | 2025-09-20 11:00:40.623295 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-20 11:00:40.623307 | orchestrator | Saturday 20 September 2025 10:59:32 +0000 (0:00:01.065) 0:00:07.481 **** 2025-09-20 11:00:40.623318 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623330 | orchestrator | 2025-09-20 11:00:40.623342 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-20 11:00:40.623353 | orchestrator | Saturday 20 September 2025 10:59:33 +0000 (0:00:01.027) 0:00:08.508 **** 2025-09-20 11:00:40.623365 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623376 | orchestrator | 2025-09-20 11:00:40.623388 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-20 11:00:40.623400 | orchestrator | Saturday 20 September 2025 10:59:36 +0000 (0:00:02.205) 0:00:10.713 **** 2025-09-20 11:00:40.623411 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623422 | orchestrator | 2025-09-20 11:00:40.623434 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-20 11:00:40.623446 | orchestrator | Saturday 20 September 2025 10:59:37 +0000 (0:00:01.409) 0:00:12.122 **** 2025-09-20 11:00:40.623457 | orchestrator | changed: [testbed-manager] 2025-09-20 11:00:40.623469 | orchestrator | 2025-09-20 11:00:40.623480 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-20 11:00:40.623492 | orchestrator | Saturday 20 September 2025 11:00:14 +0000 (0:00:37.530) 0:00:49.653 **** 2025-09-20 11:00:40.623503 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:00:40.623514 | orchestrator | 2025-09-20 11:00:40.623526 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-20 11:00:40.623538 | orchestrator | 2025-09-20 11:00:40.623549 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-20 11:00:40.623561 | orchestrator | Saturday 20 September 2025 11:00:15 +0000 (0:00:00.182) 0:00:49.836 **** 2025-09-20 
11:00:40.623572 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:00:40.623584 | orchestrator | 2025-09-20 11:00:40.623596 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-20 11:00:40.623607 | orchestrator | 2025-09-20 11:00:40.623619 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-20 11:00:40.623630 | orchestrator | Saturday 20 September 2025 11:00:16 +0000 (0:00:01.653) 0:00:51.489 **** 2025-09-20 11:00:40.623642 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:00:40.623653 | orchestrator | 2025-09-20 11:00:40.623665 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-20 11:00:40.623677 | orchestrator | 2025-09-20 11:00:40.623688 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-20 11:00:40.623700 | orchestrator | Saturday 20 September 2025 11:00:27 +0000 (0:00:11.131) 0:01:02.620 **** 2025-09-20 11:00:40.623711 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:00:40.623723 | orchestrator | 2025-09-20 11:00:40.623741 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:00:40.623759 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 11:00:40.623771 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.623783 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.623795 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:00:40.623806 | orchestrator | 2025-09-20 11:00:40.623818 | orchestrator | 2025-09-20 11:00:40.623830 | orchestrator | 2025-09-20 11:00:40.623841 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:00:40.623853 | orchestrator | Saturday 20 September 2025 11:00:38 +0000 (0:00:10.962) 0:01:13.583 **** 2025-09-20 11:00:40.623869 | orchestrator | =============================================================================== 2025-09-20 11:00:40.623881 | orchestrator | Create admin user ------------------------------------------------------ 37.53s 2025-09-20 11:00:40.623893 | orchestrator | Restart ceph manager service ------------------------------------------- 23.75s 2025-09-20 11:00:40.623904 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.21s 2025-09-20 11:00:40.623916 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.99s 2025-09-20 11:00:40.623927 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.88s 2025-09-20 11:00:40.623939 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.41s 2025-09-20 11:00:40.623950 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.25s 2025-09-20 11:00:40.623962 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s 2025-09-20 11:00:40.623973 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.03s 2025-09-20 11:00:40.623985 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.98s 2025-09-20 
11:00:40.623996 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-09-20 11:00:40.624008 | orchestrator | 2025-09-20 11:00:40 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:40.624020 | orchestrator | 2025-09-20 11:00:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:43.657473 | orchestrator | 2025-09-20 11:00:43 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:43.658045 | orchestrator | 2025-09-20 11:00:43 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:43.658655 | orchestrator | 2025-09-20 11:00:43 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:43.660551 | orchestrator | 2025-09-20 11:00:43 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:43.660568 | orchestrator | 2025-09-20 11:00:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:46.701938 | orchestrator | 2025-09-20 11:00:46 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:46.703749 | orchestrator | 2025-09-20 11:00:46 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:46.703790 | orchestrator | 2025-09-20 11:00:46 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:46.704409 | orchestrator | 2025-09-20 11:00:46 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:46.704449 | orchestrator | 2025-09-20 11:00:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:49.730806 | orchestrator | 2025-09-20 11:00:49 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:49.731231 | orchestrator | 2025-09-20 11:00:49 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:49.731912 | orchestrator | 2025-09-20 11:00:49 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:49.732746 | orchestrator | 2025-09-20 11:00:49 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:49.732769 | orchestrator | 2025-09-20 11:00:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:52.758541 | orchestrator | 2025-09-20 11:00:52 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:52.758630 | orchestrator | 2025-09-20 11:00:52 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:52.759058 | orchestrator | 2025-09-20 11:00:52 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:52.759521 | orchestrator | 2025-09-20 11:00:52 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:00:52.759542 | orchestrator | 2025-09-20 11:00:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:00:55.782574 | orchestrator | 2025-09-20 11:00:55 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:00:55.782742 | orchestrator | 2025-09-20 11:00:55 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:00:55.783130 | orchestrator | 2025-09-20 11:00:55 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:00:55.784857 | orchestrator | 2025-09-20 11:00:55 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 
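The ceph dashboard play above toggles the mgr dashboard module off, adjusts its settings (SSL off, port 7000, bind address 0.0.0.0, standby behaviour "error" with status code 404), enables it again and creates an admin user from a temporary password file. Those tasks map roughly onto the ceph CLI calls sketched below; the exact flags and the password file path are assumptions and can differ between Ceph releases:

    # Rough CLI equivalent of the dashboard bootstrap tasks above (illustration only).
    import subprocess

    def ceph(*args):
        subprocess.run(["ceph", *args], check=True)

    ceph("mgr", "module", "disable", "dashboard")
    ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
    ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
    ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
    ceph("mgr", "module", "enable", "dashboard")
    # password file path is hypothetical; the play writes and later removes it
    ceph("dashboard", "ac-user-create", "admin", "-i", "/tmp/ceph_dashboard_password", "administrator")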
11:00:55.784872 | orchestrator | 2025-09-20 11:00:55 | INFO  | Wait 1 second(s) until the next check [... status polls repeated every ~3 s from 11:00:58 to 11:02:14; Tasks 7c8ed203-dbb1-4350-9189-af0b35b98c8d, 75815cca-51fc-495d-9211-fdd0b490cf34, 5b3b37e8-5494-45bb-aada-5f9c9716fefc and 244d30fd-bf25-4f47-84b1-94f552fa7f20 all remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...] 2025-09-20 11:02:17.891046 | orchestrator | 2025-09-20 11:02:17 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:17.891774 | orchestrator | 2025-09-20 11:02:17 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:17.893247 | orchestrator | 2025-09-20 11:02:17 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:02:17.894449 | orchestrator | 2025-09-20 11:02:17 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:02:17.894560 | orchestrator | 2025-09-20
11:02:17 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:20.944881 | orchestrator | 2025-09-20 11:02:20 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:20.947177 | orchestrator | 2025-09-20 11:02:20 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:20.948986 | orchestrator | 2025-09-20 11:02:20 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:02:20.950582 | orchestrator | 2025-09-20 11:02:20 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:02:20.950622 | orchestrator | 2025-09-20 11:02:20 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:23.995886 | orchestrator | 2025-09-20 11:02:23 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:24.002221 | orchestrator | 2025-09-20 11:02:23 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:24.003292 | orchestrator | 2025-09-20 11:02:24 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state STARTED 2025-09-20 11:02:24.004461 | orchestrator | 2025-09-20 11:02:24 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:02:24.004490 | orchestrator | 2025-09-20 11:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:27.057844 | orchestrator | 2025-09-20 11:02:27 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:02:27.058753 | orchestrator | 2025-09-20 11:02:27 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:27.060880 | orchestrator | 2025-09-20 11:02:27 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:27.062661 | orchestrator | 2025-09-20 11:02:27 | INFO  | Task 5b3b37e8-5494-45bb-aada-5f9c9716fefc is in state SUCCESS 2025-09-20 11:02:27.066563 | orchestrator | 2025-09-20 11:02:27.066620 | orchestrator | 2025-09-20 11:02:27.066639 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:02:27.066657 | orchestrator | 2025-09-20 11:02:27.066674 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:02:27.066691 | orchestrator | Saturday 20 September 2025 10:59:33 +0000 (0:00:00.312) 0:00:00.312 **** 2025-09-20 11:02:27.066708 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:02:27.066724 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:02:27.066741 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:02:27.066759 | orchestrator | 2025-09-20 11:02:27.066776 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:02:27.066793 | orchestrator | Saturday 20 September 2025 10:59:33 +0000 (0:00:00.382) 0:00:00.695 **** 2025-09-20 11:02:27.066809 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-20 11:02:27.066855 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-20 11:02:27.066873 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-20 11:02:27.066887 | orchestrator | 2025-09-20 11:02:27.066897 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-20 11:02:27.066907 | orchestrator | 2025-09-20 11:02:27.066917 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-20 11:02:27.066927 | orchestrator 
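Before the glance role runs, the play buckets hosts first by Kolla action and then by enabled services, producing inventory groups such as enable_glance_True. A rough Python illustration of that grouping idea follows; the inventory data is hypothetical and this is not the playbook's actual group_by implementation.

```python
def group_hosts_by_service(flags_by_host: dict[str, dict[str, bool]]) -> dict[str, list[str]]:
    """Bucket hosts into groups named like 'enable_glance_True'."""
    groups: dict[str, list[str]] = {}
    for host, flags in flags_by_host.items():
        for flag, value in flags.items():
            groups.setdefault(f"{flag}_{value}", []).append(host)
    return groups


inventory = {
    "testbed-node-0": {"enable_glance": True},
    "testbed-node-1": {"enable_glance": True},
    "testbed-node-2": {"enable_glance": True},
}
print(group_hosts_by_service(inventory))
# {'enable_glance_True': ['testbed-node-0', 'testbed-node-1', 'testbed-node-2']}
```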
| Saturday 20 September 2025 10:59:34 +0000 (0:00:00.562) 0:00:01.258 **** 2025-09-20 11:02:27.066937 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:02:27.066947 | orchestrator | 2025-09-20 11:02:27.066958 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-20 11:02:27.066968 | orchestrator | Saturday 20 September 2025 10:59:35 +0000 (0:00:01.018) 0:00:02.276 **** 2025-09-20 11:02:27.066977 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-20 11:02:27.066987 | orchestrator | 2025-09-20 11:02:27.066997 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-20 11:02:27.067007 | orchestrator | Saturday 20 September 2025 10:59:39 +0000 (0:00:04.178) 0:00:06.455 **** 2025-09-20 11:02:27.067017 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-20 11:02:27.067027 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-20 11:02:27.067037 | orchestrator | 2025-09-20 11:02:27.067047 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-20 11:02:27.067056 | orchestrator | Saturday 20 September 2025 10:59:44 +0000 (0:00:05.717) 0:00:12.173 **** 2025-09-20 11:02:27.067066 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-20 11:02:27.067099 | orchestrator | 2025-09-20 11:02:27.067110 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-20 11:02:27.067119 | orchestrator | Saturday 20 September 2025 10:59:47 +0000 (0:00:02.705) 0:00:14.878 **** 2025-09-20 11:02:27.067130 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:02:27.067140 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-20 11:02:27.067150 | orchestrator | 2025-09-20 11:02:27.067159 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-20 11:02:27.067172 | orchestrator | Saturday 20 September 2025 10:59:51 +0000 (0:00:03.577) 0:00:18.456 **** 2025-09-20 11:02:27.067183 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:02:27.067195 | orchestrator | 2025-09-20 11:02:27.067206 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-20 11:02:27.067217 | orchestrator | Saturday 20 September 2025 10:59:53 +0000 (0:00:02.658) 0:00:21.114 **** 2025-09-20 11:02:27.067228 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-20 11:02:27.067240 | orchestrator | 2025-09-20 11:02:27.067251 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-20 11:02:27.067262 | orchestrator | Saturday 20 September 2025 10:59:58 +0000 (0:00:04.617) 0:00:25.732 **** 2025-09-20 11:02:27.067309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.067335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.067353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.067372 | orchestrator | 2025-09-20 11:02:27.067382 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-20 11:02:27.067392 | orchestrator | Saturday 20 September 2025 11:00:04 +0000 (0:00:05.894) 0:00:31.627 **** 2025-09-20 11:02:27.067402 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:02:27.067412 | orchestrator | 2025-09-20 11:02:27.067428 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-20 11:02:27.067438 | orchestrator | Saturday 20 September 2025 11:00:04 +0000 (0:00:00.554) 0:00:32.181 **** 2025-09-20 11:02:27.067448 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.067458 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:27.067467 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:27.067477 | orchestrator | 2025-09-20 11:02:27.067487 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-20 11:02:27.067497 | orchestrator | Saturday 20 September 2025 11:00:08 +0000 (0:00:03.798) 0:00:35.979 **** 2025-09-20 11:02:27.067506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:02:27.067517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:02:27.067527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:02:27.067536 | orchestrator | 2025-09-20 11:02:27.067546 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-20 11:02:27.067556 | orchestrator | Saturday 20 September 2025 11:00:10 +0000 (0:00:01.431) 0:00:37.411 **** 2025-09-20 11:02:27.067566 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:02:27.067575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:02:27.067585 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 
'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:02:27.067595 | orchestrator | 2025-09-20 11:02:27.067604 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-20 11:02:27.067614 | orchestrator | Saturday 20 September 2025 11:00:11 +0000 (0:00:01.098) 0:00:38.510 **** 2025-09-20 11:02:27.067624 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:02:27.067634 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:02:27.067643 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:02:27.067653 | orchestrator | 2025-09-20 11:02:27.067663 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-20 11:02:27.067673 | orchestrator | Saturday 20 September 2025 11:00:11 +0000 (0:00:00.563) 0:00:39.074 **** 2025-09-20 11:02:27.067682 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.067692 | orchestrator | 2025-09-20 11:02:27.067702 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-20 11:02:27.067712 | orchestrator | Saturday 20 September 2025 11:00:12 +0000 (0:00:00.268) 0:00:39.342 **** 2025-09-20 11:02:27.067721 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.067731 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.067741 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.067750 | orchestrator | 2025-09-20 11:02:27.067760 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-20 11:02:27.067770 | orchestrator | Saturday 20 September 2025 11:00:12 +0000 (0:00:00.284) 0:00:39.626 **** 2025-09-20 11:02:27.067780 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:02:27.067796 | orchestrator | 2025-09-20 11:02:27.067806 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-20 11:02:27.067816 | orchestrator | Saturday 20 September 2025 11:00:12 +0000 (0:00:00.550) 0:00:40.177 **** 2025-09-20 11:02:27.067837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.067849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.067865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.067882 | orchestrator | 2025-09-20 11:02:27.067892 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-20 11:02:27.067902 | orchestrator | Saturday 20 September 2025 11:00:18 +0000 (0:00:05.914) 0:00:46.091 **** 2025-09-20 11:02:27.067919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 11:02:27.067931 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.067942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 11:02:27.067958 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 11:02:27.068159 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068169 | orchestrator | 2025-09-20 11:02:27.068179 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-20 11:02:27.068189 | orchestrator | Saturday 20 September 2025 11:00:25 +0000 (0:00:06.362) 0:00:52.453 **** 2025-09-20 11:02:27.068200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 11:02:27.068220 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 11:02:27.068255 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 
'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 11:02:27.068289 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068299 | orchestrator | 2025-09-20 11:02:27.068309 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-20 11:02:27.068319 | orchestrator | Saturday 20 September 2025 11:00:28 +0000 (0:00:03.532) 0:00:55.986 **** 2025-09-20 11:02:27.068329 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068338 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068348 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068358 | orchestrator | 2025-09-20 11:02:27.068368 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-20 11:02:27.068378 | orchestrator | Saturday 20 September 2025 11:00:32 +0000 (0:00:03.746) 0:00:59.732 **** 2025-09-20 11:02:27.068398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 
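Each glance-api container definition above carries a healthcheck of the form `healthcheck_curl http://<node-ip>:9292`, run every 30 seconds with 3 retries. The sketch below only approximates what such a check does using the standard library; it is not Kolla's actual healthcheck_curl helper, and it treats any answer below HTTP 500 as healthy.

```python
import urllib.error
import urllib.request


def glance_api_is_up(url: str = "http://192.168.16.10:9292", timeout: float = 30.0) -> bool:
    """Return True when the Glance API endpoint answers at all (any status below 500)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 500
    except urllib.error.HTTPError as exc:
        # Unauthenticated requests may get a 3xx/4xx answer; the service is still up.
        return exc.code < 500
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, ... -> unhealthy.
        return False
```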
11:02:27.068410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.068432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.068443 | orchestrator | 2025-09-20 11:02:27.068452 | orchestrator 
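The custom_member_list entries repeated in these container definitions are plain HAProxy backend lines. A small helper that reproduces exactly those strings from a node-to-address mapping (illustrative only):

```python
def haproxy_member_lines(addresses: dict[str, str], port: int = 9292) -> list[str]:
    """Render backend members like the custom_member_list entries above."""
    return [
        f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
        for name, ip in addresses.items()
    ]


nodes = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
for line in haproxy_member_lines(nodes):
    print(line)
# server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
# ...
```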
| TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-20 11:02:27.068462 | orchestrator | Saturday 20 September 2025 11:00:37 +0000 (0:00:04.776) 0:01:04.508 **** 2025-09-20 11:02:27.068472 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:27.068482 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.068491 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:27.068501 | orchestrator | 2025-09-20 11:02:27.068510 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-20 11:02:27.068520 | orchestrator | Saturday 20 September 2025 11:00:46 +0000 (0:00:09.042) 0:01:13.550 **** 2025-09-20 11:02:27.068530 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068540 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068550 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068559 | orchestrator | 2025-09-20 11:02:27.068569 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-20 11:02:27.068585 | orchestrator | Saturday 20 September 2025 11:00:52 +0000 (0:00:05.806) 0:01:19.357 **** 2025-09-20 11:02:27.068595 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068604 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068614 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068624 | orchestrator | 2025-09-20 11:02:27.068633 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-20 11:02:27.068643 | orchestrator | Saturday 20 September 2025 11:00:59 +0000 (0:00:07.240) 0:01:26.597 **** 2025-09-20 11:02:27.068653 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068663 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068673 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068682 | orchestrator | 2025-09-20 11:02:27.068692 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-20 11:02:27.068708 | orchestrator | Saturday 20 September 2025 11:01:03 +0000 (0:00:04.509) 0:01:31.106 **** 2025-09-20 11:02:27.068718 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068727 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068737 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068747 | orchestrator | 2025-09-20 11:02:27.068757 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-20 11:02:27.068767 | orchestrator | Saturday 20 September 2025 11:01:08 +0000 (0:00:04.684) 0:01:35.791 **** 2025-09-20 11:02:27.068779 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068790 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068801 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068812 | orchestrator | 2025-09-20 11:02:27.068822 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-20 11:02:27.068833 | orchestrator | Saturday 20 September 2025 11:01:08 +0000 (0:00:00.264) 0:01:36.055 **** 2025-09-20 11:02:27.068845 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-20 11:02:27.068856 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.068867 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-20 
11:02:27.068878 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.068889 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-20 11:02:27.068900 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.068912 | orchestrator | 2025-09-20 11:02:27.068923 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-20 11:02:27.068934 | orchestrator | Saturday 20 September 2025 11:01:12 +0000 (0:00:03.428) 0:01:39.484 **** 2025-09-20 11:02:27.068951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.068972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.068993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 11:02:27.069006 | orchestrator | 2025-09-20 11:02:27.069018 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-20 11:02:27.069029 | orchestrator | Saturday 20 September 2025 11:01:17 +0000 (0:00:05.137) 0:01:44.621 **** 2025-09-20 11:02:27.069040 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:27.069051 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:27.069062 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:27.069106 | orchestrator | 2025-09-20 11:02:27.069118 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-20 11:02:27.069130 | orchestrator | Saturday 20 September 2025 11:01:17 +0000 (0:00:00.271) 0:01:44.893 **** 2025-09-20 11:02:27.069141 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.069152 | orchestrator | 2025-09-20 11:02:27.069162 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-20 11:02:27.069177 | orchestrator | Saturday 20 September 2025 11:01:19 +0000 (0:00:01.847) 0:01:46.740 **** 2025-09-20 11:02:27.069187 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.069205 | orchestrator | 2025-09-20 11:02:27.069215 | orchestrator | TASK [glance : Enable 
log_bin_trust_function_creators function] **************** 2025-09-20 11:02:27.069225 | orchestrator | Saturday 20 September 2025 11:01:21 +0000 (0:00:02.013) 0:01:48.754 **** 2025-09-20 11:02:27.069234 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.069244 | orchestrator | 2025-09-20 11:02:27.069254 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-20 11:02:27.069263 | orchestrator | Saturday 20 September 2025 11:01:23 +0000 (0:00:01.889) 0:01:50.643 **** 2025-09-20 11:02:27.069273 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.069282 | orchestrator | 2025-09-20 11:02:27.069292 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-20 11:02:27.069302 | orchestrator | Saturday 20 September 2025 11:01:47 +0000 (0:00:24.032) 0:02:14.676 **** 2025-09-20 11:02:27.069312 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.069321 | orchestrator | 2025-09-20 11:02:27.069337 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-20 11:02:27.069347 | orchestrator | Saturday 20 September 2025 11:01:49 +0000 (0:00:01.989) 0:02:16.665 **** 2025-09-20 11:02:27.069356 | orchestrator | 2025-09-20 11:02:27.069366 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-20 11:02:27.069376 | orchestrator | Saturday 20 September 2025 11:01:49 +0000 (0:00:00.148) 0:02:16.813 **** 2025-09-20 11:02:27.069385 | orchestrator | 2025-09-20 11:02:27.069395 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-20 11:02:27.069405 | orchestrator | Saturday 20 September 2025 11:01:49 +0000 (0:00:00.164) 0:02:16.978 **** 2025-09-20 11:02:27.069414 | orchestrator | 2025-09-20 11:02:27.069424 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-20 11:02:27.069434 | orchestrator | Saturday 20 September 2025 11:01:49 +0000 (0:00:00.136) 0:02:17.115 **** 2025-09-20 11:02:27.069443 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:27.069453 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:27.069462 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:27.069472 | orchestrator | 2025-09-20 11:02:27.069482 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:02:27.069492 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-20 11:02:27.069503 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-20 11:02:27.069513 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-20 11:02:27.069523 | orchestrator | 2025-09-20 11:02:27.069532 | orchestrator | 2025-09-20 11:02:27.069542 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:02:27.069552 | orchestrator | Saturday 20 September 2025 11:02:25 +0000 (0:00:35.370) 0:02:52.485 **** 2025-09-20 11:02:27.069562 | orchestrator | =============================================================================== 2025-09-20 11:02:27.069572 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.37s 2025-09-20 11:02:27.069581 | orchestrator | glance : Running Glance bootstrap container 
---------------------------- 24.03s 2025-09-20 11:02:27.069591 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.04s 2025-09-20 11:02:27.069600 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.24s 2025-09-20 11:02:27.069610 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.36s 2025-09-20 11:02:27.069620 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.91s 2025-09-20 11:02:27.069629 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.89s 2025-09-20 11:02:27.069639 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.81s 2025-09-20 11:02:27.069654 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.72s 2025-09-20 11:02:27.069664 | orchestrator | glance : Check glance containers ---------------------------------------- 5.14s 2025-09-20 11:02:27.069674 | orchestrator | glance : Copying over config.json files for services -------------------- 4.78s 2025-09-20 11:02:27.069683 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.68s 2025-09-20 11:02:27.069693 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.62s 2025-09-20 11:02:27.069703 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.51s 2025-09-20 11:02:27.069712 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.18s 2025-09-20 11:02:27.069722 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.80s 2025-09-20 11:02:27.069732 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.75s 2025-09-20 11:02:27.069741 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.58s 2025-09-20 11:02:27.069751 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.53s 2025-09-20 11:02:27.069761 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.43s 2025-09-20 11:02:27.069770 | orchestrator | 2025-09-20 11:02:27 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:02:27.069780 | orchestrator | 2025-09-20 11:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:30.110346 | orchestrator | 2025-09-20 11:02:30 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:02:30.112759 | orchestrator | 2025-09-20 11:02:30 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:30.115371 | orchestrator | 2025-09-20 11:02:30 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:30.116994 | orchestrator | 2025-09-20 11:02:30 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED 2025-09-20 11:02:30.117390 | orchestrator | 2025-09-20 11:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:33.149834 | orchestrator | 2025-09-20 11:02:33 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:02:33.151286 | orchestrator | 2025-09-20 11:02:33 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:33.153272 | orchestrator | 2025-09-20 11:02:33 | INFO  | Task 
75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:33.154614 | orchestrator | 2025-09-20 11:02:33 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED
2025-09-20 11:02:33.156897 | orchestrator | 2025-09-20 11:02:33 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:02:36.196932 | orchestrator | 2025-09-20 11:02:36 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:02:36.198738 | orchestrator | 2025-09-20 11:02:36 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:02:36.200496 | orchestrator | 2025-09-20 11:02:36 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:36.202358 | orchestrator | 2025-09-20 11:02:36 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED
2025-09-20 11:02:36.202440 | orchestrator | 2025-09-20 11:02:36 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:02:39.252519 | orchestrator | 2025-09-20 11:02:39 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:02:39.252896 | orchestrator | 2025-09-20 11:02:39 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:02:39.253736 | orchestrator | 2025-09-20 11:02:39 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:39.255353 | orchestrator | 2025-09-20 11:02:39 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED
2025-09-20 11:02:39.255376 | orchestrator | 2025-09-20 11:02:39 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:02:42.301052 | orchestrator | 2025-09-20 11:02:42 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:02:42.301591 | orchestrator | 2025-09-20 11:02:42 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:02:42.302337 | orchestrator | 2025-09-20 11:02:42 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:42.303206 | orchestrator | 2025-09-20 11:02:42 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED
2025-09-20 11:02:42.303265 | orchestrator | 2025-09-20 11:02:42 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:02:45.355840 | orchestrator | 2025-09-20 11:02:45 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:02:45.357224 | orchestrator | 2025-09-20 11:02:45 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:02:45.360544 | orchestrator | 2025-09-20 11:02:45 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:45.362764 | orchestrator | 2025-09-20 11:02:45 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED
2025-09-20 11:02:45.362815 | orchestrator | 2025-09-20 11:02:45 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:02:48.404055 | orchestrator | 2025-09-20 11:02:48 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:02:48.406768 | orchestrator | 2025-09-20 11:02:48 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:02:48.408330 | orchestrator | 2025-09-20 11:02:48 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:48.409866 | orchestrator | 2025-09-20 11:02:48 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state STARTED
2025-09-20 11:02:48.410042 | orchestrator | 2025-09-20 11:02:48 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:02:51.462989 | orchestrator | 2025-09-20 11:02:51 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:02:51.464020 | orchestrator | 2025-09-20 11:02:51 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:02:51.465782 | orchestrator | 2025-09-20 11:02:51 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED
2025-09-20 11:02:51.470216 | orchestrator | 2025-09-20 11:02:51 | INFO  | Task 244d30fd-bf25-4f47-84b1-94f552fa7f20 is in state SUCCESS
2025-09-20 11:02:51.471639 | orchestrator |
2025-09-20 11:02:51.471761 | orchestrator |
2025-09-20 11:02:51.471776 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 11:02:51.471787 | orchestrator |
2025-09-20 11:02:51.471834 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 11:02:51.471847 | orchestrator | Saturday 20 September 2025 10:59:25 +0000 (0:00:00.294) 0:00:00.294 ****
2025-09-20 11:02:51.471857 | orchestrator | ok: [testbed-manager]
2025-09-20 11:02:51.472224 | orchestrator | ok: [testbed-node-0]
2025-09-20 11:02:51.472245 | orchestrator | ok: [testbed-node-1]
2025-09-20 11:02:51.472255 | orchestrator | ok: [testbed-node-2]
2025-09-20 11:02:51.472265 | orchestrator | ok: [testbed-node-3]
2025-09-20 11:02:51.472274 | orchestrator | ok: [testbed-node-4]
2025-09-20 11:02:51.472284 | orchestrator | ok: [testbed-node-5]
2025-09-20 11:02:51.472338 | orchestrator |
2025-09-20 11:02:51.472350 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 11:02:51.472385 | orchestrator | Saturday 20 September 2025 10:59:26 +0000 (0:00:00.916) 0:00:01.210 ****
2025-09-20 11:02:51.472396 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472406 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472416 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472426 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472436 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472445 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472455 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-20 11:02:51.472465 | orchestrator |
2025-09-20 11:02:51.472474 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-09-20 11:02:51.472484 | orchestrator |
2025-09-20 11:02:51.472494 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-20 11:02:51.472504 | orchestrator | Saturday 20 September 2025 10:59:27 +0000 (0:00:00.817) 0:00:02.027 ****
2025-09-20 11:02:51.472515 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 11:02:51.472526 | orchestrator |
2025-09-20 11:02:51.472584 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-20 11:02:51.472595 | orchestrator | Saturday 20 September 2025 10:59:28 +0000 (0:00:01.655) 0:00:03.683 ****
2025-09-20 11:02:51.472608 | orchestrator | changed: [testbed-node-0] => (item={'key':
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472633 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 11:02:51.472652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.472695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.472706 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472716 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.472737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.472747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.472871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.472885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.472897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.472909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.472922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.472933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.472945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.472962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.472987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473001 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 11:02:51.473015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473111 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473135 | orchestrator | 2025-09-20 11:02:51.473147 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-20 11:02:51.473191 | orchestrator | Saturday 20 September 2025 10:59:32 +0000 (0:00:03.543) 0:00:07.226 **** 2025-09-20 11:02:51.473203 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:02:51.473214 | orchestrator | 2025-09-20 11:02:51.473224 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-20 11:02:51.473234 | orchestrator | Saturday 20 September 2025 10:59:34 +0000 (0:00:01.694) 0:00:08.920 **** 2025-09-20 11:02:51.473244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473315 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 11:02:51.473326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473470 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.473481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473624 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.473681 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 11:02:51.473698 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473742 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.473752 | orchestrator | 2025-09-20 11:02:51.473761 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-20 11:02:51.473771 | orchestrator | Saturday 20 September 2025 10:59:40 +0000 (0:00:06.110) 0:00:15.031 **** 2025-09-20 11:02:51.473781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-20 11:02:51.473792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.473832 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.473843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.473858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.473874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.473885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.473895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
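
Two patterns in this prometheus role output are worth noting: each task loops over the role's service dictionary (node-exporter, cadvisor, mysqld-exporter, and so on), and while the extra-CA-certificate copy reports changed for every item, the backend internal TLS certificate and key copies are skipped for every item, which is consistent with backend TLS not being enabled in this testbed. The following is a minimal Python sketch of that per-item selection, not kolla-ansible code; the trimmed service map, the host_services and plan_cert_tasks helpers, and the tls_backend_enabled flag are illustrative assumptions that only mirror the 'item' payloads and the changed/skipping split visible in the log.

```python
# Minimal sketch (not kolla-ansible source): reproduce the per-service item loop
# seen in this log. The dicts mirror the 'item' payloads printed above; the flag
# name `tls_backend_enabled` is an assumption used only for illustration.

# A trimmed-down version of the service map the prometheus role iterates over.
prometheus_services = {
    "prometheus-server": {
        "container_name": "prometheus_server",
        "group": "prometheus",
        "enabled": True,
    },
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
    },
}


def host_services(services: dict, host_groups: set[str]) -> dict:
    """Keep only services that are enabled and mapped to one of the host's groups."""
    return {
        name: svc
        for name, svc in services.items()
        if svc["enabled"] and svc["group"] in host_groups
    }


def plan_cert_tasks(services: dict, tls_backend_enabled: bool) -> list[tuple[str, str]]:
    """Mirror the log: the CA copy runs for every item, TLS cert/key only with backend TLS."""
    plan = []
    for name in services:
        plan.append((name, "copy extra CA certificates"))         # 'changed' in the log
        if tls_backend_enabled:
            plan.append((name, "copy backend TLS cert and key"))  # 'skipping' here
    return plan


if __name__ == "__main__":
    svcs = host_services(prometheus_services, {"prometheus", "prometheus-node-exporter"})
    for item, action in plan_cert_tasks(svcs, tls_backend_enabled=False):
        print(f"{item}: {action}")
```

Running the sketch with tls_backend_enabled=False prints only the CA-copy actions, which matches the changed/skipping split in this section of the log.
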
2025-09-20 11:02:51.473905 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-20 11:02:51.473922 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.473932 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:51.473943 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:02:51.473953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.473968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.473986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.473996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474101 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:51.474113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474177 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:51.474187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474214 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:02:51.474223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474254 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:02:51.474264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474308 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:02:51.474318 | orchestrator | 2025-09-20 11:02:51.474328 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-20 11:02:51.474338 | orchestrator | Saturday 20 September 2025 10:59:41 +0000 (0:00:01.509) 0:00:16.541 **** 2025-09-20 11:02:51.474348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-20 11:02:51.474368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474379 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-20 11:02:51.474404 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474479 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:02:51.474489 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:51.474499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474610 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:51.474620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 11:02:51.474630 | orchestrator | skipping: 
[testbed-node-2] 2025-09-20 11:02:51.474645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.474691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.474718 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:02:51.475230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.475341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.475370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.475393 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:02:51.475415 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 11:02:51.475446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.475458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 11:02:51.475491 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.475504 | orchestrator |
2025-09-20 11:02:51.475516 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-09-20 11:02:51.475528 | orchestrator | Saturday 20 September 2025 10:59:43 +0000 (0:00:02.002) 0:00:18.543 ****
2025-09-20 11:02:51.475540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 11:02:51.475585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475646 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.475689 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475836 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 11:02:51.475851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475874 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.475927 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.475971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 11:02:51.475983 | orchestrator |
2025-09-20 11:02:51.475995 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-20 11:02:51.476006 | orchestrator | Saturday 20 September 2025 10:59:49 +0000 (0:00:05.436) 0:00:23.980 ****
2025-09-20 11:02:51.476017 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 11:02:51.476028 | orchestrator |
2025-09-20 11:02:51.476039 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-20 11:02:51.476050 | orchestrator | Saturday 20 September 2025 10:59:50 +0000 (0:00:01.070) 0:00:25.050 ****
2025-09-20 11:02:51.476062 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476113 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476128 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476139 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476151 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476171 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476188 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476200 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476217 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102287, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0732918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476229 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476240 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102287, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0732918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476273 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102287, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0732918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476289 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102295, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 11:02:51.476306 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102287, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0732918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476334 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476353 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102309, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0780616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476372 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102309, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0780616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476413 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476432 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102287, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0732918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476457 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102309, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 
'ctime': 1758363531.0780616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476469 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102309, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0780616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476481 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102309, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0780616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476499 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102278, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.070293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476511 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102278, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.070293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476529 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102297, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1102319, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0835166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 11:02:51.476557 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102278, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.070293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476568 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102278, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.070293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476580 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102278, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.070293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476597 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102306, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0778477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476609 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102287, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0732918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476627 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102297, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476638 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102297, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476675 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102300, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.076716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476698 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102297, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476710 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102297, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0759532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 11:02:51.476728 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102306, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0778477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 
11:02:51.476740 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-09-20 11:02:51.476809 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-09-20 11:02:51.477180 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-09-20 11:02:51.477779 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-09-20 11:02:51.477941 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:02:51.478011 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.478242 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:02:51.478325 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-09-20 11:02:51.478384 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:02:51.478418 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.478476 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.478494 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-09-20 11:02:51.478506 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-09-20 11:02:51.478517 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-20 11:02:51.478529 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-20 11:02:51.478545 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 11:02:51.478568 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2025-09-20 11:02:51.478580 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 11:02:51.478597 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 11:02:51.478609 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 11:02:51.478621 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 11:02:51.478632 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/mysql.rules)
2025-09-20 11:02:51.478654 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/rabbitmq.rules)
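The loop results above show the manager picking up every rule file found under /operations/prometheus with mode 0644, while all testbed-nodes skip the task. A find-plus-copy pair along the following lines reproduces that behaviour; the destination directory, group name and task names are assumptions for illustration, not the role's actual source:

# Sketch only: collect the testbed's Prometheus rule files and copy them to the
# host that runs prometheus-server. Paths and the group name are assumptions.
- name: Find Prometheus alert rule files
  ansible.builtin.find:
    paths: /operations/prometheus
    patterns:
      - "*.rules"
      - "*.rec.rules"
  delegate_to: localhost
  register: prometheus_rule_files

- name: Copy Prometheus alert rule files
  ansible.builtin.copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"
    mode: "0644"
  loop: "{{ prometheus_rule_files.files }}"
  loop_control:
    label: "{{ item.path }}"
  when: inventory_hostname in groups['prometheus']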
2025-09-20 11:02:51.478666 | orchestrator |
2025-09-20 11:02:51.478678 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-20 11:02:51.478690 | orchestrator | Saturday 20 September 2025 11:00:18 +0000 (0:00:27.766) 0:00:52.816 ****
2025-09-20 11:02:51.478701 | orchestrator | ok: [testbed-manager -> localhost]
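Both override-lookup tasks scan the configuration repository for prometheus.yml.d snippets, one shared directory and one directory per host; roughly the following find calls, reconstructed from the paths visible in the warnings below (a sketch, not the role's literal code):

# Sketch only: look for optional prometheus.yml.d override snippets.
- name: Find prometheus common config overrides
  ansible.builtin.find:
    paths: /opt/configuration/environments/kolla/files/overlays/prometheus/prometheus.yml.d
    patterns: "*.yml"
  delegate_to: localhost
  register: prometheus_common_overrides

- name: Find prometheus host config overrides
  ansible.builtin.find:
    paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
    patterns: "*.yml"
  delegate_to: localhost
  register: prometheus_host_overrides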
2025-09-20 11:02:51.478712 | orchestrator |
2025-09-20 11:02:51.478723 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-20 11:02:51.478734 | orchestrator | Saturday 20 September 2025 11:00:18 +0000 (0:00:00.830) 0:00:53.647 ****
2025-09-20 11:02:51.478746 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.478796 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 11:02:51.478806 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.478855 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 11:02:51.478865 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.478919 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.478969 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.479017 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.479071 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-09-20 11:02:51.479141 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-20 11:02:51.479151 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-20 11:02:51.479161 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-20 11:02:51.479171 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-20 11:02:51.479181 | orchestrator | ok: [testbed-node-4 -> localhost]
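The [WARNING] lines above are benign: find skips a configured path that is not a directory and the task still returns ok with an empty file list. Per-host overrides only take effect (and the warnings disappear) once the expected directories exist in the configuration repository, for example along these lines (hosts listed are illustrative):

# Sketch only: pre-create per-host prometheus.yml.d overlay directories.
- name: Create prometheus.yml.d overlay directories
  ansible.builtin.file:
    path: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ item }}/prometheus.yml.d"
    state: directory
    mode: "0755"
  loop:
    - testbed-manager
    - testbed-node-0
  delegate_to: localhost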
2025-09-20 11:02:51.479190 | orchestrator |
2025-09-20 11:02:51.479200 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-20 11:02:51.479210 | orchestrator | Saturday 20 September 2025 11:00:22 +0000 (0:00:03.863) 0:00:57.511 ****
2025-09-20 11:02:51.479219 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479230 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:02:51.479240 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479249 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:02:51.479259 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479269 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:02:51.479279 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479288 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.479298 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479308 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.479318 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479332 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.479342 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-20 11:02:51.479352 | orchestrator |
2025-09-20 11:02:51.479362 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-20 11:02:51.479371 | orchestrator | Saturday 20 September 2025 11:00:43 +0000 (0:00:20.267) 0:01:17.778 ****
2025-09-20 11:02:51.479381 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-20 11:02:51.479391 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:02:51.479401 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-20 11:02:51.479410 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:02:51.479420 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-20 11:02:51.479430 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.479440 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-20 11:02:51.479449 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:02:51.479459 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-20 11:02:51.479469 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.479479 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-20 11:02:51.479488 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.479498 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
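prometheus.yml and prometheus-web.yml are rendered from the role templates named in the items above and land only on the host that runs prometheus-server, which is why every testbed-node skips while testbed-manager reports changed. A reduced sketch of that pattern (destination paths, mode and group name are assumptions):

# Sketch only: render the server configuration where the service is enabled.
- name: Copying over prometheus config file
  ansible.builtin.template:
    src: /ansible/roles/prometheus/templates/prometheus.yml.j2
    dest: /etc/kolla/prometheus-server/prometheus.yml
    mode: "0660"
  when: inventory_hostname in groups['prometheus']

- name: Copying over prometheus web config file
  ansible.builtin.template:
    src: /ansible/roles/prometheus/templates/prometheus-web.yml.j2
    dest: /etc/kolla/prometheus-server/prometheus-web.yml
    mode: "0660"
  when: inventory_hostname in groups['prometheus']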
2025-09-20 11:02:51.479514 | orchestrator |
2025-09-20 11:02:51.479524 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-20 11:02:51.479534 | orchestrator | Saturday 20 September 2025 11:00:46 +0000 (0:00:03.916) 0:01:21.694 ****
2025-09-20 11:02:51.479544 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479559 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479570 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479580 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:02:51.479590 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:02:51.479600 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:02:51.479610 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479619 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479630 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.479640 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479649 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.479659 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-20 11:02:51.479669 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.479679 | orchestrator |
2025-09-20 11:02:51.479689 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-20 11:02:51.479699 | orchestrator | Saturday 20 September 2025 11:00:50 +0000 (0:00:03.075) 0:01:24.769 ****
2025-09-20 11:02:51.479709 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 11:02:51.479718 | orchestrator |
2025-09-20 11:02:51.479728 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-20 11:02:51.479738 | orchestrator | Saturday 20 September 2025 11:00:51 +0000 (0:00:01.284) 0:01:26.054 ****
2025-09-20 11:02:51.479747 | orchestrator | skipping: [testbed-manager]
2025-09-20 11:02:51.479757 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:02:51.479767 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:02:51.479777 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:02:51.479786 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.479796 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.479806 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.479815 | orchestrator |
2025-09-20 11:02:51.479825 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-20 11:02:51.479835 | orchestrator | Saturday 20 September 2025 11:00:52 +0000 (0:00:01.096) 0:01:27.150 ****
2025-09-20 11:02:51.479845 | orchestrator | skipping: [testbed-manager]
2025-09-20 11:02:51.479854 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.479864 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.479874 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.479884 | orchestrator | changed: [testbed-node-1]
2025-09-20 11:02:51.479893 | orchestrator | changed: [testbed-node-0]
2025-09-20 11:02:51.479903 | orchestrator | changed: [testbed-node-2]
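my.cnf for mysqld_exporter only changes on testbed-node-0/1/2, presumably the control nodes hosting MariaDB, while the manager and the remaining nodes skip it. A minimal sketch of such a task (template name, destination and group name are assumptions):

# Sketch only: client credentials for mysqld_exporter on the database hosts.
- name: Copying over my.cnf for mysqld_exporter
  ansible.builtin.template:
    src: my.cnf.j2
    dest: /etc/kolla/prometheus-mysqld-exporter/my.cnf
    mode: "0660"
  when: inventory_hostname in groups['mariadb']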
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-20 11:02:51.480041 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.480051 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-20 11:02:51.480061 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.480070 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-20 11:02:51.480126 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.480137 | orchestrator |
2025-09-20 11:02:51.480146 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-20 11:02:51.480156 | orchestrator | Saturday 20 September 2025 11:00:59 +0000 (0:00:03.101) 0:01:34.109 ****
2025-09-20 11:02:51.480166 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480176 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:02:51.480185 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480195 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:02:51.480205 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480215 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:02:51.480224 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480234 | orchestrator | skipping: [testbed-node-4]
2025-09-20 11:02:51.480250 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480260 | orchestrator | skipping: [testbed-node-3]
2025-09-20 11:02:51.480270 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480280 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-20 11:02:51.480290 | orchestrator | skipping: [testbed-node-5]
2025-09-20 11:02:51.480299 | orchestrator |
2025-09-20 11:02:51.480309 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-20 11:02:51.480319 | orchestrator | Saturday 20 September 2025 11:01:01 +0000 (0:00:01.951) 0:01:36.060 ****
2025-09-20 11:02:51.480329 | orchestrator | [WARNING]: Skipped
2025-09-20 11:02:51.480338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-09-20 11:02:51.480348 | orchestrator | due to this access issue:
2025-09-20 11:02:51.480358 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-09-20 11:02:51.480367 | orchestrator | not a directory
2025-09-20 11:02:51.480377 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 11:02:51.480387 | orchestrator |
2025-09-20 11:02:51.480396 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-20 11:02:51.480406 | orchestrator | Saturday 20 September 2025 11:01:02 +0000 (0:00:01.414) 0:01:37.475 ****
2025-09-20 11:02:51.480416 | orchestrator | skipping: [testbed-manager]
2025-09-20 11:02:51.480425 | orchestrator | skipping: [testbed-node-0]
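[Editor's note] The [WARNING] above is benign: the role only collects extra Prometheus server config files when the overlay directory exists, and in this testbed /opt/configuration/environments/kolla/files/overlays/prometheus/extras/ is not a directory, so the find step warns and the follow-up subdirectory/templating tasks are skipped. A minimal Python sketch of the equivalent check (illustrative only, not the role's actual code; the function name is made up):

from pathlib import Path

# Overlay path taken from the log warning above.
EXTRAS_DIR = Path("/opt/configuration/environments/kolla/files/overlays/prometheus/extras/")

def find_extra_configs(extras_dir: Path = EXTRAS_DIR) -> list[Path]:
    """Return extra Prometheus config files, or an empty list when the
    extras overlay directory does not exist (the benign case the
    [WARNING] above reports)."""
    if not extras_dir.is_dir():
        print(f"skipping: {extras_dir} is not a directory")
        return []
    # Pick up any config snippets an operator dropped into the overlay.
    return sorted(p for p in extras_dir.rglob("*") if p.is_file())

if __name__ == "__main__":
    print(find_extra_configs())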
2025-09-20 11:02:51.480435 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:51.480450 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:51.480460 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:02:51.480470 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:02:51.480480 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:02:51.480489 | orchestrator | 2025-09-20 11:02:51.480500 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-20 11:02:51.480509 | orchestrator | Saturday 20 September 2025 11:01:03 +0000 (0:00:00.766) 0:01:38.241 **** 2025-09-20 11:02:51.480519 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:02:51.480529 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:02:51.480538 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:02:51.480548 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:02:51.480557 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:02:51.480567 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:02:51.480576 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:02:51.480586 | orchestrator | 2025-09-20 11:02:51.480596 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-20 11:02:51.480606 | orchestrator | Saturday 20 September 2025 11:01:04 +0000 (0:00:00.604) 0:01:38.845 **** 2025-09-20 11:02:51.480616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480633 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 11:02:51.480644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.480675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480701 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.480732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.480790 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 11:02:51.480820 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.480839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.480869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.480888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.480906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.480930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.480948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.480978 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 11:02:51.481011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.481032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.481051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.481071 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.481121 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.481133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.481145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.481163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.481183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 11:02:51.481195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 11:02:51.481206 | orchestrator | 2025-09-20 11:02:51.481218 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-20 11:02:51.481230 | orchestrator | Saturday 20 September 2025 11:01:09 +0000 (0:00:05.411) 0:01:44.257 **** 2025-09-20 11:02:51.481241 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-20 11:02:51.481252 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:02:51.481263 | orchestrator | 2025-09-20 11:02:51.481274 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481285 | orchestrator | Saturday 20 September 2025 11:01:10 +0000 (0:00:01.283) 0:01:45.541 **** 2025-09-20 11:02:51.481296 | orchestrator | 2025-09-20 11:02:51.481307 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481318 | orchestrator | Saturday 20 September 2025 11:01:10 +0000 (0:00:00.064) 0:01:45.606 **** 2025-09-20 11:02:51.481329 | orchestrator | 2025-09-20 11:02:51.481339 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481350 | orchestrator | Saturday 20 September 2025 11:01:10 +0000 (0:00:00.068) 0:01:45.674 **** 2025-09-20 11:02:51.481361 | orchestrator | 2025-09-20 11:02:51.481373 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481384 | orchestrator | Saturday 20 September 2025 11:01:10 +0000 (0:00:00.065) 0:01:45.739 **** 2025-09-20 11:02:51.481394 | orchestrator | 2025-09-20 11:02:51.481405 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481416 | orchestrator | Saturday 20 September 2025 11:01:11 +0000 (0:00:00.209) 0:01:45.949 **** 2025-09-20 11:02:51.481427 | orchestrator | 2025-09-20 11:02:51.481438 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481449 | orchestrator | Saturday 20 September 2025 11:01:11 +0000 (0:00:00.064) 0:01:46.013 **** 2025-09-20 11:02:51.481460 | orchestrator | 2025-09-20 11:02:51.481471 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 11:02:51.481488 | orchestrator | Saturday 20 September 2025 11:01:11 +0000 (0:00:00.067) 0:01:46.081 **** 2025-09-20 11:02:51.481501 | orchestrator | 2025-09-20 11:02:51.481512 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-20 11:02:51.481524 | orchestrator | Saturday 20 September 2025 11:01:11 +0000 (0:00:00.087) 0:01:46.169 **** 2025-09-20 11:02:51.481535 | orchestrator | changed: [testbed-manager] 2025-09-20 11:02:51.481547 | orchestrator | 2025-09-20 11:02:51.481558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-20 11:02:51.481569 | orchestrator | Saturday 20 September 2025 11:01:31 +0000 (0:00:20.131) 0:02:06.300 **** 2025-09-20 11:02:51.481581 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:02:51.481599 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:02:51.481610 | 
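[Editor's note] The "Check prometheus containers" task and the RUNNING HANDLER restarts that follow operate on per-service definitions like the items printed above: container name, group, image, volumes and optional haproxy settings. Only hosts that belong to a service's group act on it, and only containers whose definition or config changed get restarted. A small Python sketch of that structure and selection logic (field names copied from the log items, volume list abridged; the helper is illustrative, not kolla-ansible's actual code):

# One service definition in the shape shown by the log items above.
prometheus_services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
        "pid_mode": "host",
        "volumes": [  # abridged
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
}

def services_for_host(services: dict, host_groups: set[str]) -> list[str]:
    """Return the container names a host should manage: enabled services
    whose group the host belongs to."""
    return [
        svc["container_name"]
        for svc in services.values()
        if svc["enabled"] and svc["group"] in host_groups
    ]

print(services_for_host(prometheus_services, {"prometheus-node-exporter"}))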
orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:51.481621 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:02:51.481633 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:51.481644 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:51.481654 | orchestrator | changed: [testbed-manager] 2025-09-20 11:02:51.481665 | orchestrator | 2025-09-20 11:02:51.481676 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-20 11:02:51.481688 | orchestrator | Saturday 20 September 2025 11:01:44 +0000 (0:00:12.598) 0:02:18.899 **** 2025-09-20 11:02:51.481699 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:51.481710 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:51.481721 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:51.481732 | orchestrator | 2025-09-20 11:02:51.481744 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-20 11:02:51.481755 | orchestrator | Saturday 20 September 2025 11:01:49 +0000 (0:00:04.874) 0:02:23.773 **** 2025-09-20 11:02:51.481766 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:51.481777 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:51.481788 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:51.481799 | orchestrator | 2025-09-20 11:02:51.481810 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-20 11:02:51.481821 | orchestrator | Saturday 20 September 2025 11:01:54 +0000 (0:00:05.437) 0:02:29.211 **** 2025-09-20 11:02:51.481832 | orchestrator | changed: [testbed-manager] 2025-09-20 11:02:51.481844 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:51.481855 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:51.481868 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:51.481887 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:02:51.481899 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:02:51.481910 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:02:51.481922 | orchestrator | 2025-09-20 11:02:51.481933 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-20 11:02:51.481944 | orchestrator | Saturday 20 September 2025 11:02:10 +0000 (0:00:16.244) 0:02:45.456 **** 2025-09-20 11:02:51.481956 | orchestrator | changed: [testbed-manager] 2025-09-20 11:02:51.481968 | orchestrator | 2025-09-20 11:02:51.481980 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-20 11:02:51.481991 | orchestrator | Saturday 20 September 2025 11:02:27 +0000 (0:00:16.528) 0:03:01.984 **** 2025-09-20 11:02:51.482003 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:02:51.482048 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:02:51.482063 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:02:51.482090 | orchestrator | 2025-09-20 11:02:51.482102 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-20 11:02:51.482113 | orchestrator | Saturday 20 September 2025 11:02:36 +0000 (0:00:09.601) 0:03:11.586 **** 2025-09-20 11:02:51.482124 | orchestrator | changed: [testbed-manager] 2025-09-20 11:02:51.482135 | orchestrator | 2025-09-20 11:02:51.482146 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-20 11:02:51.482158 | orchestrator | Saturday 20 September 2025 
11:02:41 +0000 (0:00:04.796) 0:03:16.382 ****
2025-09-20 11:02:51.482169 | orchestrator | changed: [testbed-node-3]
2025-09-20 11:02:51.482180 | orchestrator | changed: [testbed-node-4]
2025-09-20 11:02:51.482191 | orchestrator | changed: [testbed-node-5]
2025-09-20 11:02:51.482202 | orchestrator |
2025-09-20 11:02:51.482214 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 11:02:51.482225 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 11:02:51.482239 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-20 11:02:51.482259 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-20 11:02:51.482272 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-20 11:02:51.482284 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-20 11:02:51.482296 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-20 11:02:51.482307 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-20 11:02:51.482318 | orchestrator |
2025-09-20 11:02:51.482330 | orchestrator |
2025-09-20 11:02:51.482341 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:02:51.482352 | orchestrator | Saturday 20 September 2025 11:02:48 +0000 (0:00:06.498) 0:03:22.880 ****
2025-09-20 11:02:51.482363 | orchestrator | ===============================================================================
2025-09-20 11:02:51.482380 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.77s
2025-09-20 11:02:51.482392 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.27s
2025-09-20 11:02:51.482403 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.13s
2025-09-20 11:02:51.482413 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 16.53s
2025-09-20 11:02:51.482424 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.24s
2025-09-20 11:02:51.482435 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.60s
2025-09-20 11:02:51.482446 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.60s
2025-09-20 11:02:51.482457 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.50s
2025-09-20 11:02:51.482467 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.11s
2025-09-20 11:02:51.482478 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.44s
2025-09-20 11:02:51.482489 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.44s
2025-09-20 11:02:51.482500 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.41s
2025-09-20 11:02:51.482511 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.87s
2025-09-20 11:02:51.482522 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container -------------
4.80s 2025-09-20 11:02:51.482533 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.92s 2025-09-20 11:02:51.482544 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.86s 2025-09-20 11:02:51.482555 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.86s 2025-09-20 11:02:51.482566 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.54s 2025-09-20 11:02:51.482577 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.10s 2025-09-20 11:02:51.482588 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.08s 2025-09-20 11:02:51.482607 | orchestrator | 2025-09-20 11:02:51 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:02:51.482618 | orchestrator | 2025-09-20 11:02:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:54.526921 | orchestrator | 2025-09-20 11:02:54 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:02:54.528605 | orchestrator | 2025-09-20 11:02:54 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:54.530840 | orchestrator | 2025-09-20 11:02:54 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:54.532621 | orchestrator | 2025-09-20 11:02:54 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:02:54.532655 | orchestrator | 2025-09-20 11:02:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:02:57.572949 | orchestrator | 2025-09-20 11:02:57 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:02:57.574938 | orchestrator | 2025-09-20 11:02:57 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:02:57.576285 | orchestrator | 2025-09-20 11:02:57 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:02:57.577597 | orchestrator | 2025-09-20 11:02:57 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:02:57.577832 | orchestrator | 2025-09-20 11:02:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:00.642146 | orchestrator | 2025-09-20 11:03:00 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:00.643120 | orchestrator | 2025-09-20 11:03:00 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:00.644987 | orchestrator | 2025-09-20 11:03:00 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:00.646603 | orchestrator | 2025-09-20 11:03:00 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:00.647211 | orchestrator | 2025-09-20 11:03:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:03.682689 | orchestrator | 2025-09-20 11:03:03 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:03.683224 | orchestrator | 2025-09-20 11:03:03 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:03.685136 | orchestrator | 2025-09-20 11:03:03 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:03.685840 | orchestrator | 2025-09-20 11:03:03 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:03.685966 | orchestrator | 
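[Editor's note] The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines here and below come from the deploy wrapper polling the queued Kolla plays until each reported task reaches a terminal state (the first SUCCESS appears at 11:03:37 further down). A rough Python sketch of such a poll loop, assuming a hypothetical get_state callable rather than the real task API:

import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll each task's state and log it until all tasks reach a
    terminal state; `get_state` is a stand-in, not an actual osism call."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)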
2025-09-20 11:03:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:06.722946 | orchestrator | 2025-09-20 11:03:06 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:06.724376 | orchestrator | 2025-09-20 11:03:06 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:06.725913 | orchestrator | 2025-09-20 11:03:06 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:06.727524 | orchestrator | 2025-09-20 11:03:06 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:06.727565 | orchestrator | 2025-09-20 11:03:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:09.895397 | orchestrator | 2025-09-20 11:03:09 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:09.895759 | orchestrator | 2025-09-20 11:03:09 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:09.896938 | orchestrator | 2025-09-20 11:03:09 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:09.897539 | orchestrator | 2025-09-20 11:03:09 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:09.897568 | orchestrator | 2025-09-20 11:03:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:12.938288 | orchestrator | 2025-09-20 11:03:12 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:12.942792 | orchestrator | 2025-09-20 11:03:12 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:12.945916 | orchestrator | 2025-09-20 11:03:12 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:12.948396 | orchestrator | 2025-09-20 11:03:12 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:12.949603 | orchestrator | 2025-09-20 11:03:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:16.001798 | orchestrator | 2025-09-20 11:03:15 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:16.006392 | orchestrator | 2025-09-20 11:03:16 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:16.008276 | orchestrator | 2025-09-20 11:03:16 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:16.010176 | orchestrator | 2025-09-20 11:03:16 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:16.010622 | orchestrator | 2025-09-20 11:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:19.047605 | orchestrator | 2025-09-20 11:03:19 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:19.047700 | orchestrator | 2025-09-20 11:03:19 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:19.047713 | orchestrator | 2025-09-20 11:03:19 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:19.048597 | orchestrator | 2025-09-20 11:03:19 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:19.048630 | orchestrator | 2025-09-20 11:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:22.086968 | orchestrator | 2025-09-20 11:03:22 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:22.087198 | orchestrator | 2025-09-20 11:03:22 | INFO 
 | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:22.087798 | orchestrator | 2025-09-20 11:03:22 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:22.088431 | orchestrator | 2025-09-20 11:03:22 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:22.088514 | orchestrator | 2025-09-20 11:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:25.115382 | orchestrator | 2025-09-20 11:03:25 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:25.116758 | orchestrator | 2025-09-20 11:03:25 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:25.117567 | orchestrator | 2025-09-20 11:03:25 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:25.118324 | orchestrator | 2025-09-20 11:03:25 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:25.118347 | orchestrator | 2025-09-20 11:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:28.144628 | orchestrator | 2025-09-20 11:03:28 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:28.150581 | orchestrator | 2025-09-20 11:03:28 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:28.150601 | orchestrator | 2025-09-20 11:03:28 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:28.150608 | orchestrator | 2025-09-20 11:03:28 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:28.150628 | orchestrator | 2025-09-20 11:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:31.171148 | orchestrator | 2025-09-20 11:03:31 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:31.171234 | orchestrator | 2025-09-20 11:03:31 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:31.171614 | orchestrator | 2025-09-20 11:03:31 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:31.172027 | orchestrator | 2025-09-20 11:03:31 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:31.172123 | orchestrator | 2025-09-20 11:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:34.202788 | orchestrator | 2025-09-20 11:03:34 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:34.202895 | orchestrator | 2025-09-20 11:03:34 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:34.203417 | orchestrator | 2025-09-20 11:03:34 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state STARTED 2025-09-20 11:03:34.205396 | orchestrator | 2025-09-20 11:03:34 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:34.205436 | orchestrator | 2025-09-20 11:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:37.231004 | orchestrator | 2025-09-20 11:03:37 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:37.231195 | orchestrator | 2025-09-20 11:03:37 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:37.232667 | orchestrator | 2025-09-20 11:03:37 | INFO  | Task 75815cca-51fc-495d-9211-fdd0b490cf34 is in state SUCCESS 2025-09-20 11:03:37.235520 | orchestrator | 2025-09-20 11:03:37.235571 | 
orchestrator | 2025-09-20 11:03:37.235849 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:03:37.235864 | orchestrator | 2025-09-20 11:03:37.235876 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:03:37.235888 | orchestrator | Saturday 20 September 2025 10:59:46 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-20 11:03:37.235900 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:03:37.235912 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:03:37.235923 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:03:37.235933 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:03:37.235944 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:03:37.235955 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:03:37.235966 | orchestrator | 2025-09-20 11:03:37.235977 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:03:37.235988 | orchestrator | Saturday 20 September 2025 10:59:47 +0000 (0:00:00.686) 0:00:00.949 **** 2025-09-20 11:03:37.235999 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-20 11:03:37.236010 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-20 11:03:37.236022 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-20 11:03:37.236033 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-20 11:03:37.236043 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-20 11:03:37.236054 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-20 11:03:37.236065 | orchestrator | 2025-09-20 11:03:37.236076 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-20 11:03:37.236123 | orchestrator | 2025-09-20 11:03:37.236135 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 11:03:37.236146 | orchestrator | Saturday 20 September 2025 10:59:48 +0000 (0:00:00.582) 0:00:01.531 **** 2025-09-20 11:03:37.236157 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:03:37.236197 | orchestrator | 2025-09-20 11:03:37.236209 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-20 11:03:37.236220 | orchestrator | Saturday 20 September 2025 10:59:49 +0000 (0:00:01.089) 0:00:02.620 **** 2025-09-20 11:03:37.236232 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-20 11:03:37.236243 | orchestrator | 2025-09-20 11:03:37.236254 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-20 11:03:37.236265 | orchestrator | Saturday 20 September 2025 10:59:52 +0000 (0:00:02.896) 0:00:05.516 **** 2025-09-20 11:03:37.236276 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-20 11:03:37.236293 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-20 11:03:37.236312 | orchestrator | 2025-09-20 11:03:37.236360 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-20 11:03:37.236381 | orchestrator | Saturday 20 September 2025 10:59:58 +0000 
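[Editor's note] The service-ks-register steps above register the cinderv3 service and its internal/public endpoints; the %(tenant_id)s placeholder is stored verbatim in the Keystone catalog and expanded per project at request time. A small illustrative Python snippet that reproduces the two endpoint URLs from the log (string construction only, no Keystone calls; FQDNs and port taken from the log output):

INTERNAL_FQDN = "api-int.testbed.osism.xyz"
PUBLIC_FQDN = "api.testbed.osism.xyz"
CINDER_PORT = 8776

def cinder_endpoints() -> dict[str, str]:
    # str.format only touches {...} fields, so %(tenant_id)s stays literal.
    template = "https://{fqdn}:{port}/v3/%(tenant_id)s"
    return {
        "internal": template.format(fqdn=INTERNAL_FQDN, port=CINDER_PORT),
        "public": template.format(fqdn=PUBLIC_FQDN, port=CINDER_PORT),
    }

print(cinder_endpoints())
# {'internal': 'https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s',
#  'public': 'https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s'}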
(0:00:05.979) 0:00:11.496 **** 2025-09-20 11:03:37.236400 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:03:37.236419 | orchestrator | 2025-09-20 11:03:37.236438 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-20 11:03:37.236451 | orchestrator | Saturday 20 September 2025 11:00:00 +0000 (0:00:02.685) 0:00:14.182 **** 2025-09-20 11:03:37.236462 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:03:37.236476 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-20 11:03:37.236488 | orchestrator | 2025-09-20 11:03:37.236501 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-20 11:03:37.236513 | orchestrator | Saturday 20 September 2025 11:00:04 +0000 (0:00:03.472) 0:00:17.654 **** 2025-09-20 11:03:37.236527 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:03:37.236539 | orchestrator | 2025-09-20 11:03:37.236551 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-20 11:03:37.236564 | orchestrator | Saturday 20 September 2025 11:00:07 +0000 (0:00:02.990) 0:00:20.645 **** 2025-09-20 11:03:37.236576 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-20 11:03:37.236589 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-20 11:03:37.236602 | orchestrator | 2025-09-20 11:03:37.236614 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-20 11:03:37.236626 | orchestrator | Saturday 20 September 2025 11:00:14 +0000 (0:00:06.875) 0:00:27.520 **** 2025-09-20 11:03:37.236644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.236715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.236743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.236762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.236918 | orchestrator | 2025-09-20 11:03:37.236964 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 11:03:37.236978 | orchestrator | Saturday 20 September 2025 11:00:17 +0000 (0:00:03.448) 0:00:30.969 **** 2025-09-20 11:03:37.236989 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.237000 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.237011 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.237023 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.237034 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.237044 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.237055 | orchestrator | 2025-09-20 11:03:37.237066 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 11:03:37.237077 | orchestrator | Saturday 20 September 2025 11:00:18 +0000 (0:00:00.669) 0:00:31.639 **** 2025-09-20 11:03:37.237126 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.237137 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.237148 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.237159 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:03:37.237170 | orchestrator | 2025-09-20 11:03:37.237181 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-20 11:03:37.237192 | orchestrator | Saturday 20 September 2025 11:00:19 +0000 (0:00:01.596) 0:00:33.235 **** 2025-09-20 11:03:37.237203 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-20 11:03:37.237215 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-20 11:03:37.237226 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-20 11:03:37.237237 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-20 11:03:37.237248 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-20 11:03:37.237259 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-20 11:03:37.237269 | orchestrator | 2025-09-20 11:03:37.237281 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-20 11:03:37.237300 | orchestrator | Saturday 20 September 2025 11:00:22 +0000 (0:00:02.770) 0:00:36.006 **** 2025-09-20 11:03:37.237328 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 11:03:37.237351 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 11:03:37.237372 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 11:03:37.237452 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 11:03:37.237467 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 11:03:37.237485 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 11:03:37.237498 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 11:03:37.237511 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 11:03:37.237558 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 11:03:37.237572 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 11:03:37.237589 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 11:03:37.237601 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 11:03:37.237619 | orchestrator | 2025-09-20 11:03:37.237630 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-20 11:03:37.237641 | orchestrator | Saturday 20 September 2025 11:00:27 +0000 (0:00:04.426) 0:00:40.432 **** 2025-09-20 11:03:37.237653 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:03:37.237665 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:03:37.237676 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 11:03:37.237687 | orchestrator | 2025-09-20 11:03:37.237698 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-20 11:03:37.237709 | orchestrator | Saturday 20 September 2025 11:00:29 +0000 (0:00:02.054) 
0:00:42.487 **** 2025-09-20 11:03:37.237720 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-20 11:03:37.237731 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-20 11:03:37.237742 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-20 11:03:37.237753 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 11:03:37.237764 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 11:03:37.237802 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 11:03:37.237815 | orchestrator | 2025-09-20 11:03:37.237826 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-20 11:03:37.237837 | orchestrator | Saturday 20 September 2025 11:00:31 +0000 (0:00:02.864) 0:00:45.352 **** 2025-09-20 11:03:37.237848 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-20 11:03:37.237860 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-20 11:03:37.237871 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-20 11:03:37.237882 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-20 11:03:37.237893 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-20 11:03:37.237904 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-20 11:03:37.237915 | orchestrator | 2025-09-20 11:03:37.237926 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-20 11:03:37.237937 | orchestrator | Saturday 20 September 2025 11:00:32 +0000 (0:00:00.958) 0:00:46.310 **** 2025-09-20 11:03:37.237948 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.237959 | orchestrator | 2025-09-20 11:03:37.237970 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-20 11:03:37.237981 | orchestrator | Saturday 20 September 2025 11:00:33 +0000 (0:00:00.131) 0:00:46.442 **** 2025-09-20 11:03:37.237992 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.238003 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.238014 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.238070 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.238143 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.238161 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.238177 | orchestrator | 2025-09-20 11:03:37.238195 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 11:03:37.238213 | orchestrator | Saturday 20 September 2025 11:00:33 +0000 (0:00:00.645) 0:00:47.088 **** 2025-09-20 11:03:37.238232 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:03:37.238252 | orchestrator | 2025-09-20 11:03:37.238271 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-20 11:03:37.238290 | orchestrator | Saturday 20 September 2025 11:00:34 +0000 (0:00:01.143) 0:00:48.231 **** 2025-09-20 11:03:37.238319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.238355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.238429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.238452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238619 | orchestrator | 
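
Note on the loop items in the tasks above: each item is a kolla-ansible service definition for one cinder container. The empty '' entries in the volumes and tmpfs lists appear to be placeholders left by conditional mounts that were not enabled for this deployment and are presumably dropped before the container is created. A minimal, illustrative Python sketch of that shape and filtering (names are illustrative, not kolla-ansible source code):

    # Illustrative sketch only: models the service-definition items printed in
    # this log and the dropping of empty placeholder mounts.
    cinder_volume = {
        "container_name": "cinder_volume",
        "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
        "privileged": True,
        "ipc_mode": "host",
        "volumes": [
            "/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/run:/run:shared",
            "cinder:/var/lib/cinder",
            "",  # placeholder from a conditional mount that rendered empty
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
            "timeout": "30",
        },
    }

    # Empty placeholders are removed before the container is created.
    effective_volumes = [v for v in cinder_volume["volumes"] if v]
    print(effective_volumes)
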
changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.238670 | orchestrator | 2025-09-20 11:03:37.238681 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-20 11:03:37.238692 | orchestrator | Saturday 20 September 2025 11:00:38 +0000 (0:00:03.268) 0:00:51.500 **** 2025-09-20 11:03:37.238704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.238722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.238746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238764 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.238775 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.238792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.238804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238815 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.238827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238859 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.238871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238905 | 
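
The "Copying over backend internal TLS certificate" items are skipped on every node, which is consistent with the 'tls_backend': 'no' setting visible in the cinder_api haproxy definitions above (backend TLS is not enabled in this testbed). A hedged sketch of the kind of per-service condition that would drive such a skip (the actual kolla-ansible conditional in the service-cert-copy role may differ):

    # Illustrative only: approximates the per-service decision logic.
    def needs_backend_tls_cert(service: dict) -> bool:
        haproxy = service.get("haproxy", {})
        return any(entry.get("tls_backend") == "yes" for entry in haproxy.values())

    cinder_api = {
        "haproxy": {
            "cinder_api": {"enabled": "yes", "port": "8776", "tls_backend": "no"},
            "cinder_api_external": {"enabled": "yes", "port": "8776", "tls_backend": "no"},
        }
    }
    print(needs_backend_tls_cert(cinder_api))  # False -> the copy task is skipped
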
orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.238917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.238940 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.238951 | orchestrator | 2025-09-20 11:03:37.238962 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-20 11:03:37.238973 | orchestrator | Saturday 20 September 2025 11:00:40 +0000 (0:00:02.588) 0:00:54.088 **** 2025-09-20 11:03:37.238991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.239014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2025-09-20 11:03:37.239026 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.239042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.239054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239066 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.239077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.239120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239131 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.239143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239173 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.239190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239213 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.239230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.239260 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.239271 | orchestrator | 2025-09-20 11:03:37.239283 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-20 11:03:37.239294 | orchestrator | Saturday 20 September 2025 11:00:43 +0000 (0:00:02.544) 0:00:56.633 **** 2025-09-20 11:03:37.239306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.239323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.239335 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.239354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239400 | orchestrator | changed: 
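
The config.json files being copied here are what the kolla_start entrypoint reads inside each container: they name the command to run and which files to copy from /var/lib/kolla/config_files/ into place. A simplified, hypothetical example of such a file for cinder-volume, built and printed in Python (paths, owner, and permissions are illustrative, not taken from this deployment):

    import json

    # Hypothetical kolla config.json for cinder-volume; the real file is
    # templated by the cinder role and may contain additional entries.
    config = {
        "command": "cinder-volume --config-file /etc/cinder/cinder.conf",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/cinder.conf",
                "dest": "/etc/cinder/cinder.conf",
                "owner": "cinder",
                "perm": "0600",
            },
            {
                "source": "/var/lib/kolla/config_files/ceph*",
                "dest": "/etc/ceph/",
                "owner": "cinder",
                "perm": "0600",
            },
        ],
    }
    print(json.dumps(config, indent=4))
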
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239484 | orchestrator | 2025-09-20 11:03:37.239500 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-20 11:03:37.239512 | orchestrator | Saturday 20 September 2025 11:00:46 +0000 (0:00:03.742) 0:01:00.375 **** 2025-09-20 11:03:37.239523 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-20 11:03:37.239535 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-20 11:03:37.239546 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.239558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-20 11:03:37.239569 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-20 11:03:37.239580 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.239591 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-20 11:03:37.239602 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-20 11:03:37.239613 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.239624 | orchestrator | 2025-09-20 11:03:37.239636 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-20 11:03:37.239647 | orchestrator | Saturday 20 September 2025 11:00:49 +0000 (0:00:02.881) 0:01:03.256 **** 2025-09-20 11:03:37.239658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.239683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.239695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.239712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.239841 | orchestrator | 2025-09-20 11:03:37.239852 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-20 11:03:37.239864 | orchestrator | Saturday 20 September 2025 11:01:00 +0000 (0:00:11.054) 0:01:14.311 **** 2025-09-20 11:03:37.239880 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.239892 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.239903 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.239919 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:03:37.239937 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:03:37.239955 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:03:37.239973 | orchestrator | 2025-09-20 11:03:37.239991 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-20 11:03:37.240008 | orchestrator | Saturday 20 September 2025 11:01:03 +0000 (0:00:02.766) 0:01:17.078 **** 2025-09-20 11:03:37.240027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.240046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240064 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.240241 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.240290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240315 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.240342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 11:03:37.240354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240365 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.240377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240412 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.240423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240446 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.240465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 11:03:37.240489 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.240500 | orchestrator | 2025-09-20 11:03:37.240511 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-20 11:03:37.240523 | orchestrator | Saturday 20 September 2025 11:01:05 +0000 (0:00:01.467) 0:01:18.545 **** 2025-09-20 11:03:37.240534 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.240545 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.240555 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.240566 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.240577 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.240588 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.240605 | orchestrator | 2025-09-20 11:03:37.240617 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-20 11:03:37.240628 | orchestrator | Saturday 20 September 2025 11:01:06 +0000 (0:00:01.343) 0:01:19.889 **** 2025-09-20 11:03:37.240644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.240672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.240697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 11:03:37.240718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 11:03:37.240808 | orchestrator | 2025-09-20 11:03:37.240818 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 11:03:37.240828 | orchestrator | Saturday 20 September 2025 11:01:09 +0000 (0:00:02.758) 0:01:22.648 **** 2025-09-20 11:03:37.240838 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.240848 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:03:37.240858 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:03:37.240868 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:03:37.240877 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:03:37.240887 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:03:37.240896 | orchestrator | 2025-09-20 11:03:37.240906 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-20 11:03:37.240916 | orchestrator | Saturday 20 September 2025 11:01:09 +0000 (0:00:00.631) 0:01:23.279 **** 2025-09-20 11:03:37.240925 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:03:37.240935 | orchestrator | 2025-09-20 11:03:37.240945 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-20 11:03:37.240954 | orchestrator | Saturday 20 September 2025 11:01:12 +0000 (0:00:02.144) 0:01:25.423 **** 2025-09-20 11:03:37.240964 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:03:37.240974 | orchestrator | 2025-09-20 11:03:37.240983 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-20 11:03:37.240993 | orchestrator | Saturday 20 September 2025 11:01:14 +0000 (0:00:01.991) 0:01:27.415 **** 2025-09-20 11:03:37.241003 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:03:37.241012 | orchestrator | 2025-09-20 11:03:37.241022 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 11:03:37.241032 | orchestrator | Saturday 20 September 2025 11:01:31 +0000 (0:00:17.690) 0:01:45.105 **** 2025-09-20 11:03:37.241042 | orchestrator | 2025-09-20 11:03:37.241057 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 11:03:37.241067 | orchestrator | Saturday 20 September 2025 11:01:31 +0000 (0:00:00.155) 0:01:45.260 **** 2025-09-20 11:03:37.241076 | orchestrator | 2025-09-20 11:03:37.241118 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 11:03:37.241128 | orchestrator | 
Saturday 20 September 2025 11:01:32 +0000 (0:00:00.129) 0:01:45.390 **** 2025-09-20 11:03:37.241138 | orchestrator | 2025-09-20 11:03:37.241147 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 11:03:37.241157 | orchestrator | Saturday 20 September 2025 11:01:32 +0000 (0:00:00.163) 0:01:45.554 **** 2025-09-20 11:03:37.241178 | orchestrator | 2025-09-20 11:03:37.241187 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 11:03:37.241197 | orchestrator | Saturday 20 September 2025 11:01:32 +0000 (0:00:00.164) 0:01:45.719 **** 2025-09-20 11:03:37.241207 | orchestrator | 2025-09-20 11:03:37.241217 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 11:03:37.241226 | orchestrator | Saturday 20 September 2025 11:01:32 +0000 (0:00:00.259) 0:01:45.978 **** 2025-09-20 11:03:37.241236 | orchestrator | 2025-09-20 11:03:37.241245 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-20 11:03:37.241255 | orchestrator | Saturday 20 September 2025 11:01:32 +0000 (0:00:00.133) 0:01:46.112 **** 2025-09-20 11:03:37.241264 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:03:37.241274 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:03:37.241283 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:03:37.241293 | orchestrator | 2025-09-20 11:03:37.241303 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-20 11:03:37.241312 | orchestrator | Saturday 20 September 2025 11:01:59 +0000 (0:00:26.618) 0:02:12.730 **** 2025-09-20 11:03:37.241322 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:03:37.241331 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:03:37.241341 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:03:37.241350 | orchestrator | 2025-09-20 11:03:37.241360 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-20 11:03:37.241370 | orchestrator | Saturday 20 September 2025 11:02:08 +0000 (0:00:09.396) 0:02:22.127 **** 2025-09-20 11:03:37.241379 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:03:37.241389 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:03:37.241398 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:03:37.241407 | orchestrator | 2025-09-20 11:03:37.241417 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-20 11:03:37.241427 | orchestrator | Saturday 20 September 2025 11:03:23 +0000 (0:01:15.244) 0:03:37.372 **** 2025-09-20 11:03:37.241437 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:03:37.241446 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:03:37.241456 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:03:37.241465 | orchestrator | 2025-09-20 11:03:37.241475 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-20 11:03:37.241485 | orchestrator | Saturday 20 September 2025 11:03:35 +0000 (0:00:11.364) 0:03:48.737 **** 2025-09-20 11:03:37.241500 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:03:37.241510 | orchestrator | 2025-09-20 11:03:37.241519 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:03:37.241530 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 
skipped=11  rescued=0 ignored=0
2025-09-20 11:03:37.241541 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-20 11:03:37.241551 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-20 11:03:37.241560 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 11:03:37.241570 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 11:03:37.241580 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 11:03:37.241589 | orchestrator |
2025-09-20 11:03:37.241599 | orchestrator |
2025-09-20 11:03:37.241609 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:03:37.241618 | orchestrator | Saturday 20 September 2025 11:03:35 +0000 (0:00:00.573) 0:03:49.311 ****
2025-09-20 11:03:37.241634 | orchestrator | ===============================================================================
2025-09-20 11:03:37.241644 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 75.24s
2025-09-20 11:03:37.241653 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.62s
2025-09-20 11:03:37.241663 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.69s
2025-09-20 11:03:37.241673 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.36s
2025-09-20 11:03:37.241682 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.06s
2025-09-20 11:03:37.241692 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.40s
2025-09-20 11:03:37.241702 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.88s
2025-09-20 11:03:37.241711 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.98s
2025-09-20 11:03:37.241727 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.43s
2025-09-20 11:03:37.241737 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.74s
2025-09-20 11:03:37.241747 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.47s
2025-09-20 11:03:37.241756 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.45s
2025-09-20 11:03:37.241766 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.27s
2025-09-20 11:03:37.241775 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.99s
2025-09-20 11:03:37.241785 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.90s
2025-09-20 11:03:37.241794 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.88s
2025-09-20 11:03:37.241804 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.86s
2025-09-20 11:03:37.241813 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.77s
2025-09-20 11:03:37.241823 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.76s
2025-09-20 11:03:37.241832 | orchestrator |
cinder : Check cinder containers ---------------------------------------- 2.76s 2025-09-20 11:03:37.241842 | orchestrator | 2025-09-20 11:03:37 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:37.241852 | orchestrator | 2025-09-20 11:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:40.269430 | orchestrator | 2025-09-20 11:03:40 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:40.276965 | orchestrator | 2025-09-20 11:03:40 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:40.279304 | orchestrator | 2025-09-20 11:03:40 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:40.284892 | orchestrator | 2025-09-20 11:03:40 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:40.284943 | orchestrator | 2025-09-20 11:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:43.320361 | orchestrator | 2025-09-20 11:03:43 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:43.320581 | orchestrator | 2025-09-20 11:03:43 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:43.321121 | orchestrator | 2025-09-20 11:03:43 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:43.321840 | orchestrator | 2025-09-20 11:03:43 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:43.321868 | orchestrator | 2025-09-20 11:03:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:46.345413 | orchestrator | 2025-09-20 11:03:46 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:46.345527 | orchestrator | 2025-09-20 11:03:46 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:46.346002 | orchestrator | 2025-09-20 11:03:46 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:46.346771 | orchestrator | 2025-09-20 11:03:46 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:46.346800 | orchestrator | 2025-09-20 11:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:49.386923 | orchestrator | 2025-09-20 11:03:49 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:49.387028 | orchestrator | 2025-09-20 11:03:49 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:49.387433 | orchestrator | 2025-09-20 11:03:49 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:49.389197 | orchestrator | 2025-09-20 11:03:49 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:49.389247 | orchestrator | 2025-09-20 11:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:52.426903 | orchestrator | 2025-09-20 11:03:52 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:52.427190 | orchestrator | 2025-09-20 11:03:52 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:52.427801 | orchestrator | 2025-09-20 11:03:52 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:52.428589 | orchestrator | 2025-09-20 11:03:52 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:52.428616 | orchestrator | 2025-09-20 
11:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:55.455750 | orchestrator | 2025-09-20 11:03:55 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:55.455870 | orchestrator | 2025-09-20 11:03:55 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:55.456462 | orchestrator | 2025-09-20 11:03:55 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:55.457792 | orchestrator | 2025-09-20 11:03:55 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:55.457821 | orchestrator | 2025-09-20 11:03:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:03:58.486393 | orchestrator | 2025-09-20 11:03:58 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:03:58.486630 | orchestrator | 2025-09-20 11:03:58 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:03:58.487479 | orchestrator | 2025-09-20 11:03:58 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:03:58.488354 | orchestrator | 2025-09-20 11:03:58 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:03:58.488435 | orchestrator | 2025-09-20 11:03:58 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:01.537334 | orchestrator | 2025-09-20 11:04:01 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:01.542809 | orchestrator | 2025-09-20 11:04:01 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:01.542868 | orchestrator | 2025-09-20 11:04:01 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:01.542882 | orchestrator | 2025-09-20 11:04:01 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:01.542919 | orchestrator | 2025-09-20 11:04:01 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:04.577760 | orchestrator | 2025-09-20 11:04:04 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:04.577865 | orchestrator | 2025-09-20 11:04:04 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:04.578510 | orchestrator | 2025-09-20 11:04:04 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:04.579197 | orchestrator | 2025-09-20 11:04:04 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:04.579275 | orchestrator | 2025-09-20 11:04:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:07.616373 | orchestrator | 2025-09-20 11:04:07 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:07.616611 | orchestrator | 2025-09-20 11:04:07 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:07.617428 | orchestrator | 2025-09-20 11:04:07 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:07.618287 | orchestrator | 2025-09-20 11:04:07 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:07.618309 | orchestrator | 2025-09-20 11:04:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:10.641499 | orchestrator | 2025-09-20 11:04:10 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:10.642185 | orchestrator | 2025-09-20 11:04:10 | INFO  | Task 
7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:10.642570 | orchestrator | 2025-09-20 11:04:10 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:10.643267 | orchestrator | 2025-09-20 11:04:10 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:10.643292 | orchestrator | 2025-09-20 11:04:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:13.673213 | orchestrator | 2025-09-20 11:04:13 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:13.673324 | orchestrator | 2025-09-20 11:04:13 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:13.673930 | orchestrator | 2025-09-20 11:04:13 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:13.674665 | orchestrator | 2025-09-20 11:04:13 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:13.674720 | orchestrator | 2025-09-20 11:04:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:16.702594 | orchestrator | 2025-09-20 11:04:16 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:16.702956 | orchestrator | 2025-09-20 11:04:16 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:16.703951 | orchestrator | 2025-09-20 11:04:16 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:16.704662 | orchestrator | 2025-09-20 11:04:16 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:16.704703 | orchestrator | 2025-09-20 11:04:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:19.753453 | orchestrator | 2025-09-20 11:04:19 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:19.754230 | orchestrator | 2025-09-20 11:04:19 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:19.755110 | orchestrator | 2025-09-20 11:04:19 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:19.756767 | orchestrator | 2025-09-20 11:04:19 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:19.757626 | orchestrator | 2025-09-20 11:04:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:22.776634 | orchestrator | 2025-09-20 11:04:22 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:22.776755 | orchestrator | 2025-09-20 11:04:22 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:22.777488 | orchestrator | 2025-09-20 11:04:22 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:22.778284 | orchestrator | 2025-09-20 11:04:22 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:22.778311 | orchestrator | 2025-09-20 11:04:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:25.803525 | orchestrator | 2025-09-20 11:04:25 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:25.804173 | orchestrator | 2025-09-20 11:04:25 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:25.804947 | orchestrator | 2025-09-20 11:04:25 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:25.806127 | orchestrator | 2025-09-20 11:04:25 | INFO  | Task 
1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:25.806221 | orchestrator | 2025-09-20 11:04:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:28.835415 | orchestrator | 2025-09-20 11:04:28 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:28.835519 | orchestrator | 2025-09-20 11:04:28 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:28.835961 | orchestrator | 2025-09-20 11:04:28 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:28.836717 | orchestrator | 2025-09-20 11:04:28 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:28.836748 | orchestrator | 2025-09-20 11:04:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:31.882011 | orchestrator | 2025-09-20 11:04:31 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:31.882541 | orchestrator | 2025-09-20 11:04:31 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:31.883470 | orchestrator | 2025-09-20 11:04:31 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:31.884607 | orchestrator | 2025-09-20 11:04:31 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:31.884631 | orchestrator | 2025-09-20 11:04:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:34.916997 | orchestrator | 2025-09-20 11:04:34 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:34.917490 | orchestrator | 2025-09-20 11:04:34 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:34.918333 | orchestrator | 2025-09-20 11:04:34 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:34.919074 | orchestrator | 2025-09-20 11:04:34 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:34.919108 | orchestrator | 2025-09-20 11:04:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:37.940818 | orchestrator | 2025-09-20 11:04:37 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:37.941190 | orchestrator | 2025-09-20 11:04:37 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:37.941641 | orchestrator | 2025-09-20 11:04:37 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:37.942447 | orchestrator | 2025-09-20 11:04:37 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:37.944609 | orchestrator | 2025-09-20 11:04:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:40.964690 | orchestrator | 2025-09-20 11:04:40 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:40.964814 | orchestrator | 2025-09-20 11:04:40 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:40.965291 | orchestrator | 2025-09-20 11:04:40 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:40.965901 | orchestrator | 2025-09-20 11:04:40 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state STARTED 2025-09-20 11:04:40.965928 | orchestrator | 2025-09-20 11:04:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:43.986898 | orchestrator | 2025-09-20 11:04:43 | INFO  | Task 
eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:43.987716 | orchestrator | 2025-09-20 11:04:43 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:43.989386 | orchestrator | 2025-09-20 11:04:43 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:43.989921 | orchestrator | 2025-09-20 11:04:43 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:04:43.991876 | orchestrator | 2025-09-20 11:04:43.993365 | orchestrator | 2025-09-20 11:04:43 | INFO  | Task 1d1f2c7e-6dec-4ee4-8f59-b891774057fe is in state SUCCESS 2025-09-20 11:04:43.993408 | orchestrator | 2025-09-20 11:04:43.993423 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:04:43.993441 | orchestrator | 2025-09-20 11:04:43.993461 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:04:43.993481 | orchestrator | Saturday 20 September 2025 11:02:52 +0000 (0:00:00.271) 0:00:00.271 **** 2025-09-20 11:04:43.993500 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:04:43.993522 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:04:43.993540 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:04:43.993557 | orchestrator | 2025-09-20 11:04:43.993570 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:04:43.993580 | orchestrator | Saturday 20 September 2025 11:02:52 +0000 (0:00:00.295) 0:00:00.567 **** 2025-09-20 11:04:43.993606 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-20 11:04:43.993617 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-20 11:04:43.993627 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-20 11:04:43.993637 | orchestrator | 2025-09-20 11:04:43.993646 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-20 11:04:43.993656 | orchestrator | 2025-09-20 11:04:43.993671 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-20 11:04:43.993686 | orchestrator | Saturday 20 September 2025 11:02:53 +0000 (0:00:00.439) 0:00:01.006 **** 2025-09-20 11:04:43.993702 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:04:43.993719 | orchestrator | 2025-09-20 11:04:43.993734 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-20 11:04:43.993752 | orchestrator | Saturday 20 September 2025 11:02:53 +0000 (0:00:00.601) 0:00:01.607 **** 2025-09-20 11:04:43.993770 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-20 11:04:43.993806 | orchestrator | 2025-09-20 11:04:43.993817 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-20 11:04:43.993826 | orchestrator | Saturday 20 September 2025 11:02:57 +0000 (0:00:03.095) 0:00:04.703 **** 2025-09-20 11:04:43.993836 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-20 11:04:43.993846 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-20 11:04:43.993855 | orchestrator | 2025-09-20 11:04:43.993865 | orchestrator | TASK [service-ks-register : barbican | 
Creating projects] ********************** 2025-09-20 11:04:43.993875 | orchestrator | Saturday 20 September 2025 11:03:03 +0000 (0:00:06.162) 0:00:10.866 **** 2025-09-20 11:04:43.993884 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:04:43.993894 | orchestrator | 2025-09-20 11:04:43.993904 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-20 11:04:43.993914 | orchestrator | Saturday 20 September 2025 11:03:06 +0000 (0:00:03.266) 0:00:14.133 **** 2025-09-20 11:04:43.993924 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:04:43.993934 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-20 11:04:43.993943 | orchestrator | 2025-09-20 11:04:43.993953 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-20 11:04:43.993962 | orchestrator | Saturday 20 September 2025 11:03:10 +0000 (0:00:03.691) 0:00:17.824 **** 2025-09-20 11:04:43.993972 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:04:43.993982 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-20 11:04:43.993992 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-20 11:04:43.994002 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-20 11:04:43.994012 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-20 11:04:43.994079 | orchestrator | 2025-09-20 11:04:43.994124 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-20 11:04:43.994142 | orchestrator | Saturday 20 September 2025 11:03:24 +0000 (0:00:14.170) 0:00:31.995 **** 2025-09-20 11:04:43.994156 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-20 11:04:43.994166 | orchestrator | 2025-09-20 11:04:43.994175 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-20 11:04:43.994185 | orchestrator | Saturday 20 September 2025 11:03:27 +0000 (0:00:03.525) 0:00:35.520 **** 2025-09-20 11:04:43.994198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.994233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.994254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.994266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 
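[editor's note] The container definitions in the log above all carry a Docker-style healthcheck block (interval 30s, timeout 30s, 3 retries, 5s start period): API containers probe their bind address with "healthcheck_curl http://<ip>:<port>", while worker/listener containers use "healthcheck_port <service> 5672". The following is a minimal illustrative sketch of what such a probe amounts to, assuming the barbican-api values for testbed-node-0 shown above; it is NOT the actual kolla healthcheck_curl/healthcheck_port helpers (in particular, the real healthcheck_port inspects the sockets held by the named process rather than dialling the port).

#!/usr/bin/env python3
# Illustrative sketch only -- approximates the intent of the healthchecks seen
# in the container definitions above: an HTTP probe against the service's bind
# address plus a TCP probe for the messaging port (5672). Exit code semantics
# follow Docker healthchecks: 0 = healthy, non-zero = unhealthy.
import socket
import sys
import urllib.error
import urllib.request


def http_check(url, timeout=30.0):
    """Return True if the endpoint answers with any HTTP status below 500."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except OSError:
        return False


def port_check(host, port, timeout=30.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Address and port taken from the barbican-api healthcheck for
    # testbed-node-0 above; using the same host for the 5672 probe is an
    # assumption made purely for illustration.
    ok = http_check("http://192.168.16.10:9311") and port_check("192.168.16.10", 5672)
    sys.exit(0 if ok else 1)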
2025-09-20 11:04:43.994305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994349 | orchestrator | 2025-09-20 11:04:43.994359 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-20 11:04:43.994369 | orchestrator | Saturday 20 September 2025 11:03:29 +0000 (0:00:01.669) 0:00:37.190 **** 2025-09-20 11:04:43.994378 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-20 11:04:43.994388 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-20 11:04:43.994397 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-20 11:04:43.994407 | orchestrator | 2025-09-20 11:04:43.994416 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-20 11:04:43.994426 | orchestrator | Saturday 20 September 2025 11:03:30 +0000 (0:00:01.139) 0:00:38.329 **** 2025-09-20 11:04:43.994435 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.994445 | orchestrator | 2025-09-20 11:04:43.994455 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-20 11:04:43.994464 | orchestrator | Saturday 20 September 2025 11:03:30 +0000 (0:00:00.117) 0:00:38.447 **** 2025-09-20 11:04:43.994474 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.994484 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:04:43.994493 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:04:43.994503 | orchestrator | 2025-09-20 11:04:43.994512 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-20 11:04:43.994522 | orchestrator | Saturday 20 September 2025 11:03:31 +0000 
(0:00:00.460) 0:00:38.907 **** 2025-09-20 11:04:43.994531 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:04:43.994541 | orchestrator | 2025-09-20 11:04:43.994551 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-20 11:04:43.994560 | orchestrator | Saturday 20 September 2025 11:03:31 +0000 (0:00:00.568) 0:00:39.475 **** 2025-09-20 11:04:43.994570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.994594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.994609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.994620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.994706 | orchestrator | 2025-09-20 11:04:43.994716 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-20 11:04:43.994726 | orchestrator | Saturday 20 September 2025 11:03:35 +0000 (0:00:03.860) 0:00:43.336 **** 2025-09-20 11:04:43.994736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.994746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994773 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.994789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.994804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994824 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:04:43.994835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.994845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994871 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:04:43.994881 | orchestrator | 2025-09-20 11:04:43.994890 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-20 11:04:43.994900 | orchestrator | Saturday 20 September 2025 11:03:36 +0000 (0:00:00.837) 0:00:44.173 **** 2025-09-20 11:04:43.994920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.994931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994951 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.994961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.994977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.994997 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:04:43.995017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.995028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995038 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995048 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:04:43.995058 | orchestrator | 2025-09-20 11:04:43.995068 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-20 11:04:43.995077 | orchestrator | Saturday 20 September 2025 11:03:37 +0000 (0:00:01.435) 0:00:45.609 **** 2025-09-20 11:04:43.995113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995222 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995232 | orchestrator | 2025-09-20 11:04:43.995242 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-20 11:04:43.995252 | orchestrator | Saturday 20 September 2025 11:03:41 +0000 (0:00:03.327) 0:00:48.936 **** 2025-09-20 11:04:43.995261 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:04:43.995271 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:04:43.995281 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:04:43.995291 | orchestrator | 2025-09-20 11:04:43.995300 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-20 11:04:43.995310 | orchestrator | Saturday 20 September 2025 11:03:43 +0000 (0:00:02.376) 0:00:51.313 **** 2025-09-20 11:04:43.995320 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 11:04:43.995329 | orchestrator | 2025-09-20 11:04:43.995339 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-20 11:04:43.995348 | orchestrator | Saturday 20 September 2025 11:03:44 +0000 (0:00:01.055) 0:00:52.368 **** 2025-09-20 11:04:43.995358 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.995367 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:04:43.995377 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:04:43.995386 | orchestrator | 2025-09-20 11:04:43.995396 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-20 11:04:43.995406 | orchestrator | Saturday 20 September 2025 11:03:45 +0000 (0:00:00.511) 0:00:52.880 **** 2025-09-20 11:04:43.995422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995529 | orchestrator | 2025-09-20 11:04:43.995539 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-20 11:04:43.995549 | orchestrator | Saturday 20 September 2025 11:03:54 +0000 (0:00:09.091) 0:01:01.975 **** 2025-09-20 11:04:43.995572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.995583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995610 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.995621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.995631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995658 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:04:43.995672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 11:04:43.995689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:04:43.995708 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:04:43.995719 | orchestrator | 2025-09-20 11:04:43.995728 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-20 11:04:43.995738 | orchestrator | Saturday 20 September 2025 11:03:55 +0000 (0:00:01.409) 0:01:03.384 **** 2025-09-20 11:04:43.995748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 11:04:43.995794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:04:43.995872 | orchestrator | 2025-09-20 11:04:43.995882 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-20 11:04:43.995892 | orchestrator | Saturday 20 September 2025 11:03:58 +0000 (0:00:02.851) 0:01:06.236 **** 2025-09-20 11:04:43.995902 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:04:43.995912 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:04:43.995921 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:04:43.995931 | orchestrator | 2025-09-20 11:04:43.995941 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-20 11:04:43.995950 | orchestrator | Saturday 20 September 2025 11:03:58 +0000 (0:00:00.331) 0:01:06.567 **** 2025-09-20 11:04:43.995960 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:04:43.995969 | orchestrator | 2025-09-20 11:04:43.995979 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-20 11:04:43.995989 | orchestrator | Saturday 20 September 2025 11:04:00 +0000 (0:00:01.902) 0:01:08.469 **** 2025-09-20 11:04:43.995998 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:04:43.996008 | orchestrator | 2025-09-20 11:04:43.996017 | orchestrator | TASK [barbican : Running barbican 
bootstrap container] *************************
2025-09-20 11:04:43.996027 | orchestrator | Saturday 20 September 2025 11:04:02 +0000 (0:00:01.804) 0:01:10.274 ****
2025-09-20 11:04:43.996037 | orchestrator | changed: [testbed-node-0]
2025-09-20 11:04:43.996046 | orchestrator |
2025-09-20 11:04:43.996056 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-20 11:04:43.996065 | orchestrator | Saturday 20 September 2025 11:04:12 +0000 (0:00:09.459) 0:01:19.733 ****
2025-09-20 11:04:43.996075 | orchestrator |
2025-09-20 11:04:43.996084 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-20 11:04:43.996110 | orchestrator | Saturday 20 September 2025 11:04:12 +0000 (0:00:00.153) 0:01:19.886 ****
2025-09-20 11:04:43.996119 | orchestrator |
2025-09-20 11:04:43.996129 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-20 11:04:43.996139 | orchestrator | Saturday 20 September 2025 11:04:12 +0000 (0:00:00.110) 0:01:19.997 ****
2025-09-20 11:04:43.996148 | orchestrator |
2025-09-20 11:04:43.996158 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-20 11:04:43.996167 | orchestrator | Saturday 20 September 2025 11:04:12 +0000 (0:00:00.110) 0:01:20.107 ****
2025-09-20 11:04:43.996177 | orchestrator | changed: [testbed-node-2]
2025-09-20 11:04:43.996187 | orchestrator | changed: [testbed-node-1]
2025-09-20 11:04:43.996196 | orchestrator | changed: [testbed-node-0]
2025-09-20 11:04:43.996206 | orchestrator |
2025-09-20 11:04:43.996216 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-20 11:04:43.996225 | orchestrator | Saturday 20 September 2025 11:04:21 +0000 (0:00:09.434) 0:01:29.542 ****
2025-09-20 11:04:43.996235 | orchestrator | changed: [testbed-node-0]
2025-09-20 11:04:43.996244 | orchestrator | changed: [testbed-node-1]
2025-09-20 11:04:43.996254 | orchestrator | changed: [testbed-node-2]
2025-09-20 11:04:43.996264 | orchestrator |
2025-09-20 11:04:43.996273 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-20 11:04:43.996283 | orchestrator | Saturday 20 September 2025 11:04:29 +0000 (0:00:07.195) 0:01:36.737 ****
2025-09-20 11:04:43.996293 | orchestrator | changed: [testbed-node-1]
2025-09-20 11:04:43.996302 | orchestrator | changed: [testbed-node-2]
2025-09-20 11:04:43.996312 | orchestrator | changed: [testbed-node-0]
2025-09-20 11:04:43.996321 | orchestrator |
2025-09-20 11:04:43.996331 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 11:04:43.996348 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-20 11:04:43.996359 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 11:04:43.996368 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 11:04:43.996378 | orchestrator |
2025-09-20 11:04:43.996388 | orchestrator |
2025-09-20 11:04:43.996398 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:04:43.996407 | orchestrator | Saturday 20 September 2025 11:04:39 +0000 (0:00:10.710) 0:01:47.448 ****
2025-09-20 11:04:43.996417 | orchestrator | ===============================================================================
2025-09-20 11:04:43.996427 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.17s
2025-09-20 11:04:43.996442 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.71s
2025-09-20 11:04:43.996452 | orchestrator | barbican : Running barbican bootstrap container ------------------------- 9.46s
2025-09-20 11:04:43.996462 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.43s
2025-09-20 11:04:43.996472 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.09s
2025-09-20 11:04:43.996481 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.20s
2025-09-20 11:04:43.996491 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.16s
2025-09-20 11:04:43.996500 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.86s
2025-09-20 11:04:43.996514 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.69s
2025-09-20 11:04:43.996524 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.53s
2025-09-20 11:04:43.996533 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.33s
2025-09-20 11:04:43.996543 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.27s
2025-09-20 11:04:43.996552 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.10s
2025-09-20 11:04:43.996562 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.85s
2025-09-20 11:04:43.996572 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.38s
2025-09-20 11:04:43.996581 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.90s
2025-09-20 11:04:43.996591 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.80s
2025-09-20 11:04:43.996600 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.67s
2025-09-20 11:04:43.996610 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.44s
2025-09-20 11:04:43.996620 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.41s
2025-09-20 11:04:43.996629 | orchestrator | 2025-09-20 11:04:43 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:04:47.014246 | orchestrator | 2025-09-20 11:04:47 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
2025-09-20 11:04:47.014660 | orchestrator | 2025-09-20 11:04:47 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED
2025-09-20 11:04:47.015340 | orchestrator | 2025-09-20 11:04:47 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED
2025-09-20 11:04:47.016142 | orchestrator | 2025-09-20 11:04:47 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED
2025-09-20 11:04:47.016165 | orchestrator | 2025-09-20 11:04:47 | INFO  | Wait 1 second(s) until the next check
2025-09-20 11:04:50.044261 | orchestrator | 2025-09-20 11:04:50 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED
7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:50.045332 | orchestrator | 2025-09-20 11:04:50 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:50.046148 | orchestrator | 2025-09-20 11:04:50 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:04:50.046192 | orchestrator | 2025-09-20 11:04:50 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:53.066658 | orchestrator | 2025-09-20 11:04:53 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:53.067068 | orchestrator | 2025-09-20 11:04:53 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:53.068577 | orchestrator | 2025-09-20 11:04:53 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:53.070108 | orchestrator | 2025-09-20 11:04:53 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:04:53.070135 | orchestrator | 2025-09-20 11:04:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:56.117953 | orchestrator | 2025-09-20 11:04:56 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:56.119043 | orchestrator | 2025-09-20 11:04:56 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:56.120827 | orchestrator | 2025-09-20 11:04:56 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:56.121567 | orchestrator | 2025-09-20 11:04:56 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:04:56.122216 | orchestrator | 2025-09-20 11:04:56 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:04:59.161833 | orchestrator | 2025-09-20 11:04:59 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:04:59.161964 | orchestrator | 2025-09-20 11:04:59 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:04:59.163196 | orchestrator | 2025-09-20 11:04:59 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:04:59.164949 | orchestrator | 2025-09-20 11:04:59 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:04:59.165014 | orchestrator | 2025-09-20 11:04:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:02.200432 | orchestrator | 2025-09-20 11:05:02 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:02.202308 | orchestrator | 2025-09-20 11:05:02 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:02.203569 | orchestrator | 2025-09-20 11:05:02 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:02.204678 | orchestrator | 2025-09-20 11:05:02 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:02.204709 | orchestrator | 2025-09-20 11:05:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:05.240705 | orchestrator | 2025-09-20 11:05:05 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:05.242370 | orchestrator | 2025-09-20 11:05:05 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:05.244144 | orchestrator | 2025-09-20 11:05:05 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:05.246152 | orchestrator | 2025-09-20 11:05:05 | INFO  | Task 
2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:05.246222 | orchestrator | 2025-09-20 11:05:05 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:08.281807 | orchestrator | 2025-09-20 11:05:08 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:08.283884 | orchestrator | 2025-09-20 11:05:08 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:08.287652 | orchestrator | 2025-09-20 11:05:08 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:08.292631 | orchestrator | 2025-09-20 11:05:08 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:08.292723 | orchestrator | 2025-09-20 11:05:08 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:11.322531 | orchestrator | 2025-09-20 11:05:11 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:11.322637 | orchestrator | 2025-09-20 11:05:11 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:11.323841 | orchestrator | 2025-09-20 11:05:11 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:11.324062 | orchestrator | 2025-09-20 11:05:11 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:11.324153 | orchestrator | 2025-09-20 11:05:11 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:14.365561 | orchestrator | 2025-09-20 11:05:14 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:14.365701 | orchestrator | 2025-09-20 11:05:14 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:14.366958 | orchestrator | 2025-09-20 11:05:14 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:14.367651 | orchestrator | 2025-09-20 11:05:14 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:14.367681 | orchestrator | 2025-09-20 11:05:14 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:17.392418 | orchestrator | 2025-09-20 11:05:17 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:17.393016 | orchestrator | 2025-09-20 11:05:17 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:17.394294 | orchestrator | 2025-09-20 11:05:17 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:17.396023 | orchestrator | 2025-09-20 11:05:17 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:17.396466 | orchestrator | 2025-09-20 11:05:17 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:20.433253 | orchestrator | 2025-09-20 11:05:20 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:20.433716 | orchestrator | 2025-09-20 11:05:20 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:20.435323 | orchestrator | 2025-09-20 11:05:20 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:20.436603 | orchestrator | 2025-09-20 11:05:20 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:20.436764 | orchestrator | 2025-09-20 11:05:20 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:23.472788 | orchestrator | 2025-09-20 11:05:23 | INFO  | Task 
eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:23.472999 | orchestrator | 2025-09-20 11:05:23 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:23.474347 | orchestrator | 2025-09-20 11:05:23 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:23.474973 | orchestrator | 2025-09-20 11:05:23 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:23.475009 | orchestrator | 2025-09-20 11:05:23 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:26.521039 | orchestrator | 2025-09-20 11:05:26 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:26.523303 | orchestrator | 2025-09-20 11:05:26 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:26.524888 | orchestrator | 2025-09-20 11:05:26 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:26.526656 | orchestrator | 2025-09-20 11:05:26 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state STARTED 2025-09-20 11:05:26.526682 | orchestrator | 2025-09-20 11:05:26 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:29.575821 | orchestrator | 2025-09-20 11:05:29 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:29.575930 | orchestrator | 2025-09-20 11:05:29 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:29.575979 | orchestrator | 2025-09-20 11:05:29 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:29.576001 | orchestrator | 2025-09-20 11:05:29 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:29.576019 | orchestrator | 2025-09-20 11:05:29 | INFO  | Task 2c165d35-e12f-4737-afd5-4b519c19e14e is in state SUCCESS 2025-09-20 11:05:29.576036 | orchestrator | 2025-09-20 11:05:29 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:32.594379 | orchestrator | 2025-09-20 11:05:32 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:32.594502 | orchestrator | 2025-09-20 11:05:32 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:32.594931 | orchestrator | 2025-09-20 11:05:32 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:32.596854 | orchestrator | 2025-09-20 11:05:32 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:32.596879 | orchestrator | 2025-09-20 11:05:32 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:35.622497 | orchestrator | 2025-09-20 11:05:35 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:35.624148 | orchestrator | 2025-09-20 11:05:35 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:35.624685 | orchestrator | 2025-09-20 11:05:35 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:35.625506 | orchestrator | 2025-09-20 11:05:35 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:35.625536 | orchestrator | 2025-09-20 11:05:35 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:38.648772 | orchestrator | 2025-09-20 11:05:38 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:38.648891 | orchestrator | 2025-09-20 11:05:38 | INFO  | Task 
7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:38.649246 | orchestrator | 2025-09-20 11:05:38 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:38.649978 | orchestrator | 2025-09-20 11:05:38 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:38.650008 | orchestrator | 2025-09-20 11:05:38 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:41.679889 | orchestrator | 2025-09-20 11:05:41 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:41.680213 | orchestrator | 2025-09-20 11:05:41 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:41.680752 | orchestrator | 2025-09-20 11:05:41 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:41.681498 | orchestrator | 2025-09-20 11:05:41 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:41.681524 | orchestrator | 2025-09-20 11:05:41 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:44.710008 | orchestrator | 2025-09-20 11:05:44 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:44.712189 | orchestrator | 2025-09-20 11:05:44 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:44.714330 | orchestrator | 2025-09-20 11:05:44 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:44.716098 | orchestrator | 2025-09-20 11:05:44 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:44.716115 | orchestrator | 2025-09-20 11:05:44 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:47.755493 | orchestrator | 2025-09-20 11:05:47 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:47.755601 | orchestrator | 2025-09-20 11:05:47 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:47.756183 | orchestrator | 2025-09-20 11:05:47 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:47.756771 | orchestrator | 2025-09-20 11:05:47 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:47.756802 | orchestrator | 2025-09-20 11:05:47 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:50.792193 | orchestrator | 2025-09-20 11:05:50 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:50.796714 | orchestrator | 2025-09-20 11:05:50 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:50.800198 | orchestrator | 2025-09-20 11:05:50 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:50.803886 | orchestrator | 2025-09-20 11:05:50 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:50.805765 | orchestrator | 2025-09-20 11:05:50 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:53.849355 | orchestrator | 2025-09-20 11:05:53 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:53.850918 | orchestrator | 2025-09-20 11:05:53 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:53.852682 | orchestrator | 2025-09-20 11:05:53 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:53.856516 | orchestrator | 2025-09-20 11:05:53 | INFO  | Task 
45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:53.856553 | orchestrator | 2025-09-20 11:05:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:56.903942 | orchestrator | 2025-09-20 11:05:56 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:56.905542 | orchestrator | 2025-09-20 11:05:56 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:56.907940 | orchestrator | 2025-09-20 11:05:56 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:56.910994 | orchestrator | 2025-09-20 11:05:56 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:56.911031 | orchestrator | 2025-09-20 11:05:56 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:05:59.945368 | orchestrator | 2025-09-20 11:05:59 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:05:59.945541 | orchestrator | 2025-09-20 11:05:59 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:05:59.947180 | orchestrator | 2025-09-20 11:05:59 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:05:59.947676 | orchestrator | 2025-09-20 11:05:59 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:05:59.947807 | orchestrator | 2025-09-20 11:05:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:02.981500 | orchestrator | 2025-09-20 11:06:02 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:02.984109 | orchestrator | 2025-09-20 11:06:02 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:02.984625 | orchestrator | 2025-09-20 11:06:02 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:02.985411 | orchestrator | 2025-09-20 11:06:02 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:02.985447 | orchestrator | 2025-09-20 11:06:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:06.010910 | orchestrator | 2025-09-20 11:06:06 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:06.013427 | orchestrator | 2025-09-20 11:06:06 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:06.015368 | orchestrator | 2025-09-20 11:06:06 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:06.016249 | orchestrator | 2025-09-20 11:06:06 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:06.016280 | orchestrator | 2025-09-20 11:06:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:09.141812 | orchestrator | 2025-09-20 11:06:09 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:09.142407 | orchestrator | 2025-09-20 11:06:09 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:09.142982 | orchestrator | 2025-09-20 11:06:09 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:09.143922 | orchestrator | 2025-09-20 11:06:09 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:09.143944 | orchestrator | 2025-09-20 11:06:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:12.390353 | orchestrator | 2025-09-20 11:06:12 | INFO  | Task 
eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:12.391670 | orchestrator | 2025-09-20 11:06:12 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:12.391739 | orchestrator | 2025-09-20 11:06:12 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:12.392316 | orchestrator | 2025-09-20 11:06:12 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:12.392360 | orchestrator | 2025-09-20 11:06:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:15.435188 | orchestrator | 2025-09-20 11:06:15 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:15.436405 | orchestrator | 2025-09-20 11:06:15 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:15.438369 | orchestrator | 2025-09-20 11:06:15 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:15.439303 | orchestrator | 2025-09-20 11:06:15 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:15.440555 | orchestrator | 2025-09-20 11:06:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:18.479138 | orchestrator | 2025-09-20 11:06:18 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:18.480115 | orchestrator | 2025-09-20 11:06:18 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:18.482163 | orchestrator | 2025-09-20 11:06:18 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:18.485910 | orchestrator | 2025-09-20 11:06:18 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:18.485974 | orchestrator | 2025-09-20 11:06:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:21.527928 | orchestrator | 2025-09-20 11:06:21 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:21.529370 | orchestrator | 2025-09-20 11:06:21 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:21.529848 | orchestrator | 2025-09-20 11:06:21 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:21.532785 | orchestrator | 2025-09-20 11:06:21 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:21.532822 | orchestrator | 2025-09-20 11:06:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:24.577536 | orchestrator | 2025-09-20 11:06:24 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:24.578421 | orchestrator | 2025-09-20 11:06:24 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:24.579559 | orchestrator | 2025-09-20 11:06:24 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:24.582647 | orchestrator | 2025-09-20 11:06:24 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:24.582785 | orchestrator | 2025-09-20 11:06:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:27.618477 | orchestrator | 2025-09-20 11:06:27 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:27.618635 | orchestrator | 2025-09-20 11:06:27 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:27.621270 | orchestrator | 2025-09-20 11:06:27 | INFO  | Task 
47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:27.622288 | orchestrator | 2025-09-20 11:06:27 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:27.622334 | orchestrator | 2025-09-20 11:06:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:30.678666 | orchestrator | 2025-09-20 11:06:30 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:30.680763 | orchestrator | 2025-09-20 11:06:30 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:30.682761 | orchestrator | 2025-09-20 11:06:30 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:30.686987 | orchestrator | 2025-09-20 11:06:30 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:30.687210 | orchestrator | 2025-09-20 11:06:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:33.735895 | orchestrator | 2025-09-20 11:06:33 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state STARTED 2025-09-20 11:06:33.737925 | orchestrator | 2025-09-20 11:06:33 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:33.739970 | orchestrator | 2025-09-20 11:06:33 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:33.741746 | orchestrator | 2025-09-20 11:06:33 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state STARTED 2025-09-20 11:06:33.742090 | orchestrator | 2025-09-20 11:06:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:36.795622 | orchestrator | 2025-09-20 11:06:36 | INFO  | Task eb457117-d0d1-4c67-bb53-eb84d1368ab4 is in state SUCCESS 2025-09-20 11:06:36.797929 | orchestrator | 2025-09-20 11:06:36.797975 | orchestrator | 2025-09-20 11:06:36.797981 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-20 11:06:36.797987 | orchestrator | 2025-09-20 11:06:36.797991 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-20 11:06:36.797995 | orchestrator | Saturday 20 September 2025 11:04:48 +0000 (0:00:00.390) 0:00:00.390 **** 2025-09-20 11:06:36.798000 | orchestrator | changed: [localhost] 2025-09-20 11:06:36.798005 | orchestrator | 2025-09-20 11:06:36.798009 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-20 11:06:36.798047 | orchestrator | Saturday 20 September 2025 11:04:49 +0000 (0:00:01.437) 0:00:01.828 **** 2025-09-20 11:06:36.798053 | orchestrator | changed: [localhost] 2025-09-20 11:06:36.798057 | orchestrator | 2025-09-20 11:06:36.798061 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-20 11:06:36.798095 | orchestrator | Saturday 20 September 2025 11:05:21 +0000 (0:00:31.602) 0:00:33.430 **** 2025-09-20 11:06:36.798099 | orchestrator | changed: [localhost] 2025-09-20 11:06:36.798104 | orchestrator | 2025-09-20 11:06:36.798108 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:06:36.798112 | orchestrator | 2025-09-20 11:06:36.798116 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:06:36.798120 | orchestrator | Saturday 20 September 2025 11:05:25 +0000 (0:00:04.648) 0:00:38.079 **** 2025-09-20 11:06:36.798123 | orchestrator | ok: [testbed-node-0] 2025-09-20 
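
The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above is the OSISM client polling the task IDs of the service deployments it launched until each one reports SUCCESS. A minimal sketch of that polling pattern follows; the client object and its get_task_state accessor are hypothetical stand-ins, not the real osism API:

    import time

    def wait_for_tasks(client, task_ids, interval=1.0):
        """Poll until every task has left the STARTED state, mirroring the console output above."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = client.get_task_state(task_id)  # hypothetical accessor
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
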
11:06:36.798127 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:36.798131 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:36.798135 | orchestrator | 2025-09-20 11:06:36.798139 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:06:36.798143 | orchestrator | Saturday 20 September 2025 11:05:26 +0000 (0:00:00.306) 0:00:38.385 **** 2025-09-20 11:06:36.798147 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-20 11:06:36.798151 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-20 11:06:36.798155 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-20 11:06:36.798159 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-20 11:06:36.798163 | orchestrator | 2025-09-20 11:06:36.798167 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-20 11:06:36.798171 | orchestrator | skipping: no hosts matched 2025-09-20 11:06:36.798175 | orchestrator | 2025-09-20 11:06:36.798181 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:06:36.798188 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:06:36.798197 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:06:36.798205 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:06:36.798210 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:06:36.798236 | orchestrator | 2025-09-20 11:06:36.798240 | orchestrator | 2025-09-20 11:06:36.798244 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:06:36.798248 | orchestrator | Saturday 20 September 2025 11:05:26 +0000 (0:00:00.421) 0:00:38.807 **** 2025-09-20 11:06:36.798252 | orchestrator | =============================================================================== 2025-09-20 11:06:36.798255 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.60s 2025-09-20 11:06:36.798269 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.65s 2025-09-20 11:06:36.798273 | orchestrator | Ensure the destination directory exists --------------------------------- 1.44s 2025-09-20 11:06:36.798277 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-09-20 11:06:36.798281 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-20 11:06:36.798284 | orchestrator | 2025-09-20 11:06:36.798288 | orchestrator | 2025-09-20 11:06:36.798292 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:06:36.798296 | orchestrator | 2025-09-20 11:06:36.798300 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:06:36.798304 | orchestrator | Saturday 20 September 2025 11:02:29 +0000 (0:00:00.266) 0:00:00.266 **** 2025-09-20 11:06:36.798307 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:06:36.798311 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:36.798315 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:36.798319 | orchestrator | ok: 
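
The short play above only pre-fetches the ironic-agent initramfs and kernel; the ironic role itself is then skipped because every host lands in the enable_ironic_False group. The Ansible task body is not shown in the log; a hypothetical Python equivalent of such a large-file fetch (URL and destination are placeholders) would stream to disk rather than buffering the image in memory:

    import requests

    def fetch(url: str, dest: str, chunk_size: int = 1 << 20) -> None:
        """Stream a large image (e.g. an ironic-agent initramfs) to disk."""
        with requests.get(url, stream=True, timeout=300) as resp:
            resp.raise_for_status()
            with open(dest, "wb") as fh:
                for chunk in resp.iter_content(chunk_size):
                    if chunk:
                        fh.write(chunk)

    # Placeholder values, not taken from the job configuration:
    fetch("https://example.org/ironic-python-agent.initramfs", "/opt/ironic/ironic-agent.initramfs")
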
[testbed-node-3] 2025-09-20 11:06:36.798322 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:06:36.798326 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:06:36.798330 | orchestrator | 2025-09-20 11:06:36.798334 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:06:36.798338 | orchestrator | Saturday 20 September 2025 11:02:30 +0000 (0:00:00.606) 0:00:00.872 **** 2025-09-20 11:06:36.798341 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-20 11:06:36.798345 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-20 11:06:36.798349 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-20 11:06:36.798353 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-20 11:06:36.798357 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-20 11:06:36.798361 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-20 11:06:36.798365 | orchestrator | 2025-09-20 11:06:36.798368 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-20 11:06:36.798372 | orchestrator | 2025-09-20 11:06:36.798376 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 11:06:36.798380 | orchestrator | Saturday 20 September 2025 11:02:30 +0000 (0:00:00.526) 0:00:01.399 **** 2025-09-20 11:06:36.798394 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:06:36.798398 | orchestrator | 2025-09-20 11:06:36.798402 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-20 11:06:36.798406 | orchestrator | Saturday 20 September 2025 11:02:31 +0000 (0:00:01.056) 0:00:02.455 **** 2025-09-20 11:06:36.798409 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:06:36.798413 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:36.798417 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:06:36.798421 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:36.798425 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:06:36.798428 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:06:36.798432 | orchestrator | 2025-09-20 11:06:36.798436 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-20 11:06:36.798440 | orchestrator | Saturday 20 September 2025 11:02:32 +0000 (0:00:01.152) 0:00:03.608 **** 2025-09-20 11:06:36.798447 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:36.798451 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:36.798455 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:06:36.798459 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:06:36.798463 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:06:36.798466 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:06:36.798470 | orchestrator | 2025-09-20 11:06:36.798474 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-20 11:06:36.798478 | orchestrator | Saturday 20 September 2025 11:02:33 +0000 (0:00:00.988) 0:00:04.596 **** 2025-09-20 11:06:36.798482 | orchestrator | ok: [testbed-node-0] => { 2025-09-20 11:06:36.798515 | orchestrator |  "changed": false, 2025-09-20 11:06:36.798521 | orchestrator |  "msg": "All assertions passed" 2025-09-20 11:06:36.798526 | orchestrator | } 2025-09-20 
11:06:36.798531 | orchestrator | ok: [testbed-node-1] => { 2025-09-20 11:06:36.798535 | orchestrator |  "changed": false, 2025-09-20 11:06:36.798539 | orchestrator |  "msg": "All assertions passed" 2025-09-20 11:06:36.798543 | orchestrator | } 2025-09-20 11:06:36.798547 | orchestrator | ok: [testbed-node-2] => { 2025-09-20 11:06:36.798552 | orchestrator |  "changed": false, 2025-09-20 11:06:36.798556 | orchestrator |  "msg": "All assertions passed" 2025-09-20 11:06:36.798560 | orchestrator | } 2025-09-20 11:06:36.798565 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 11:06:36.798569 | orchestrator |  "changed": false, 2025-09-20 11:06:36.798574 | orchestrator |  "msg": "All assertions passed" 2025-09-20 11:06:36.798578 | orchestrator | } 2025-09-20 11:06:36.798582 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 11:06:36.798587 | orchestrator |  "changed": false, 2025-09-20 11:06:36.798591 | orchestrator |  "msg": "All assertions passed" 2025-09-20 11:06:36.798595 | orchestrator | } 2025-09-20 11:06:36.798600 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 11:06:36.798604 | orchestrator |  "changed": false, 2025-09-20 11:06:36.798697 | orchestrator |  "msg": "All assertions passed" 2025-09-20 11:06:36.798702 | orchestrator | } 2025-09-20 11:06:36.798706 | orchestrator | 2025-09-20 11:06:36.798711 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-20 11:06:36.798715 | orchestrator | Saturday 20 September 2025 11:02:34 +0000 (0:00:00.679) 0:00:05.275 **** 2025-09-20 11:06:36.798738 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.798743 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.798747 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.798751 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.798755 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.798759 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.798764 | orchestrator | 2025-09-20 11:06:36.798768 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-20 11:06:36.798773 | orchestrator | Saturday 20 September 2025 11:02:35 +0000 (0:00:00.568) 0:00:05.844 **** 2025-09-20 11:06:36.798777 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-20 11:06:36.798781 | orchestrator | 2025-09-20 11:06:36.798786 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-20 11:06:36.798793 | orchestrator | Saturday 20 September 2025 11:02:38 +0000 (0:00:03.166) 0:00:09.010 **** 2025-09-20 11:06:36.798798 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-20 11:06:36.798803 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-20 11:06:36.798807 | orchestrator | 2025-09-20 11:06:36.798812 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-20 11:06:36.798816 | orchestrator | Saturday 20 September 2025 11:02:44 +0000 (0:00:06.119) 0:00:15.129 **** 2025-09-20 11:06:36.798820 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:06:36.798825 | orchestrator | 2025-09-20 11:06:36.798841 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-20 11:06:36.798846 | orchestrator | Saturday 20 September 2025 11:02:47 +0000 
(0:00:02.999) 0:00:18.129 **** 2025-09-20 11:06:36.798853 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:06:36.798858 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-20 11:06:36.798862 | orchestrator | 2025-09-20 11:06:36.798867 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-20 11:06:36.798871 | orchestrator | Saturday 20 September 2025 11:02:51 +0000 (0:00:03.522) 0:00:21.652 **** 2025-09-20 11:06:36.798875 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:06:36.798880 | orchestrator | 2025-09-20 11:06:36.798884 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-20 11:06:36.798889 | orchestrator | Saturday 20 September 2025 11:02:54 +0000 (0:00:03.041) 0:00:24.693 **** 2025-09-20 11:06:36.798893 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-20 11:06:36.798897 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-20 11:06:36.798901 | orchestrator | 2025-09-20 11:06:36.798906 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 11:06:36.798910 | orchestrator | Saturday 20 September 2025 11:03:01 +0000 (0:00:07.157) 0:00:31.851 **** 2025-09-20 11:06:36.798914 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.798917 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.798926 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.798930 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.798933 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.798937 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.798941 | orchestrator | 2025-09-20 11:06:36.798945 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-20 11:06:36.798948 | orchestrator | Saturday 20 September 2025 11:03:02 +0000 (0:00:00.771) 0:00:32.623 **** 2025-09-20 11:06:36.798952 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.798956 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.798960 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.798964 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.798967 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.798971 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.798975 | orchestrator | 2025-09-20 11:06:36.798979 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-20 11:06:36.798982 | orchestrator | Saturday 20 September 2025 11:03:04 +0000 (0:00:02.267) 0:00:34.891 **** 2025-09-20 11:06:36.798986 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:36.798990 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:06:36.798994 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:36.798998 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:06:36.799001 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:06:36.799005 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:06:36.799009 | orchestrator | 2025-09-20 11:06:36.799013 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-20 11:06:36.799016 | orchestrator | Saturday 20 September 2025 11:03:05 +0000 (0:00:01.065) 0:00:35.956 **** 2025-09-20 11:06:36.799020 | orchestrator | skipping: 
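
The service-ks-register tasks above register neutron in Keystone: the network service, its internal and public endpoints, the service project and user, and the admin/service role grants. A condensed sketch of the same registration with openstacksdk follows; the cloud name, region and password are assumptions, while the endpoint URLs are the ones shown in the log:

    import openstack

    # "testbed" and "RegionOne" are assumptions; adjust to the local clouds.yaml and region.
    conn = openstack.connect(cloud="testbed")

    service = conn.identity.create_service(name="neutron", type="network", enabled=True)
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9696"),
        ("public", "https://api.testbed.osism.xyz:9696"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id, interface=interface, url=url, region_id="RegionOne"
        )

    project = conn.identity.find_project("service")
    user = conn.identity.create_user(
        name="neutron", password="NEUTRON_KEYSTONE_PASSWORD", default_project_id=project.id
    )
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name)
        conn.identity.assign_project_role_to_user(project, user, role)
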
[testbed-node-1] 2025-09-20 11:06:36.799024 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799028 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799031 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799035 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799039 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.799042 | orchestrator | 2025-09-20 11:06:36.799046 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-20 11:06:36.799050 | orchestrator | Saturday 20 September 2025 11:03:07 +0000 (0:00:02.314) 0:00:38.271 **** 2025-09-20 11:06:36.799056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799121 | orchestrator | 2025-09-20 11:06:36.799125 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-20 11:06:36.799137 | orchestrator | Saturday 20 September 2025 11:03:10 +0000 (0:00:02.937) 0:00:41.208 **** 2025-09-20 11:06:36.799141 | orchestrator | [WARNING]: Skipped 2025-09-20 11:06:36.799146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-20 11:06:36.799149 | orchestrator | due to this access issue: 2025-09-20 11:06:36.799156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-20 11:06:36.799160 | orchestrator | a directory 2025-09-20 11:06:36.799164 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 11:06:36.799168 | orchestrator | 2025-09-20 11:06:36.799171 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 11:06:36.799175 | orchestrator | Saturday 20 September 2025 11:03:11 +0000 (0:00:00.895) 0:00:42.103 **** 2025-09-20 11:06:36.799181 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:06:36.799189 | orchestrator | 2025-09-20 
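
Each container definition above carries a Docker-style healthcheck, e.g. ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'] or 'healthcheck_curl http://192.168.16.10:9696', with interval 30, retries 3 and start_period 5. The healthcheck_port / healthcheck_curl helpers come from the kolla images themselves and are not reproduced in the log; purely as an illustration of the kind of probe such a test stands for, a minimal TCP reachability check could look like this (host and port are placeholders):

    import socket
    import sys

    def port_reachable(host: str, port: int, timeout: float = 30.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # Exit code 0/1 so a container healthcheck can consume the result.
        sys.exit(0 if port_reachable("192.168.16.10", 9696) else 1)
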
11:06:36.799195 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-20 11:06:36.799202 | orchestrator | Saturday 20 September 2025 11:03:12 +0000 (0:00:01.424) 0:00:43.528 **** 2025-09-20 11:06:36.799207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799266 | orchestrator | 2025-09-20 11:06:36.799270 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-20 11:06:36.799277 | orchestrator | Saturday 20 September 2025 11:03:16 +0000 (0:00:03.547) 0:00:47.076 **** 2025-09-20 11:06:36.799281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799288 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799296 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799308 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799316 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.799325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799329 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.799333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799340 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799344 | orchestrator | 2025-09-20 11:06:36.799348 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-20 11:06:36.799352 | orchestrator | Saturday 20 September 2025 11:03:19 +0000 (0:00:02.714) 0:00:49.790 **** 2025-09-20 11:06:36.799356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799360 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799373 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799381 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799396 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799404 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.799408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799412 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.799416 | orchestrator | 2025-09-20 11:06:36.799420 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-20 11:06:36.799424 | orchestrator | Saturday 20 September 2025 11:03:21 +0000 (0:00:02.561) 0:00:52.352 **** 2025-09-20 11:06:36.799427 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799433 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.799437 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799441 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799445 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799449 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
11:06:36.799452 | orchestrator | 2025-09-20 11:06:36.799456 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-20 11:06:36.799460 | orchestrator | Saturday 20 September 2025 11:03:23 +0000 (0:00:02.145) 0:00:54.498 **** 2025-09-20 11:06:36.799464 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799468 | orchestrator | 2025-09-20 11:06:36.799472 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-20 11:06:36.799475 | orchestrator | Saturday 20 September 2025 11:03:24 +0000 (0:00:00.128) 0:00:54.626 **** 2025-09-20 11:06:36.799479 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799483 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.799487 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799491 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799494 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799498 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.799502 | orchestrator | 2025-09-20 11:06:36.799506 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-20 11:06:36.799513 | orchestrator | Saturday 20 September 2025 11:03:24 +0000 (0:00:00.754) 0:00:55.381 **** 2025-09-20 11:06:36.799749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799758 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799767 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.799771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799775 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799786 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799799 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799812 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.799816 | orchestrator | 2025-09-20 11:06:36.799820 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-20 
11:06:36.799824 | orchestrator | Saturday 20 September 2025 11:03:28 +0000 (0:00:03.291) 0:00:58.673 **** 2025-09-20 11:06:36.799828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799855 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799863 | orchestrator | 2025-09-20 11:06:36.799867 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-20 11:06:36.799871 | orchestrator | Saturday 20 September 2025 11:03:31 +0000 (0:00:03.542) 0:01:02.215 **** 2025-09-20 11:06:36.799875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.799903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.799907 | orchestrator | 2025-09-20 
11:06:36.799911 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-20 11:06:36.799918 | orchestrator | Saturday 20 September 2025 11:03:36 +0000 (0:00:04.798) 0:01:07.014 **** 2025-09-20 11:06:36.799924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799928 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.799934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799938 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799946 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.799950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799954 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.799960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.799967 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.799971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.799975 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799979 | orchestrator | 2025-09-20 11:06:36.799983 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-20 11:06:36.799987 | orchestrator | Saturday 20 September 2025 11:03:39 +0000 (0:00:02.678) 0:01:09.692 **** 2025-09-20 11:06:36.799990 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.799994 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.799998 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800002 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.800005 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:36.800009 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:36.800013 | orchestrator | 2025-09-20 11:06:36.800017 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-20 11:06:36.800022 | orchestrator | Saturday 20 September 2025 11:03:41 +0000 (0:00:02.497) 0:01:12.190 **** 2025-09-20 11:06:36.800027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800031 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800041 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800049 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.800084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.800090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.800094 | orchestrator | 2025-09-20 11:06:36.800098 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-20 11:06:36.800102 | orchestrator | Saturday 20 September 2025 11:03:45 +0000 (0:00:03.981) 0:01:16.171 **** 2025-09-20 11:06:36.800106 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800110 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800114 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800121 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800125 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800129 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800132 | orchestrator | 2025-09-20 11:06:36.800136 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-20 11:06:36.800140 | orchestrator | Saturday 20 September 2025 11:03:47 +0000 (0:00:02.343) 0:01:18.515 **** 2025-09-20 11:06:36.800144 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800148 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800151 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800155 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800159 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800163 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800166 | orchestrator | 2025-09-20 11:06:36.800170 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-20 11:06:36.800174 | orchestrator | Saturday 20 September 2025 11:03:50 +0000 (0:00:02.543) 0:01:21.058 **** 2025-09-20 11:06:36.800180 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800186 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800192 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800198 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800223 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800230 | 
orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800236 | orchestrator | 2025-09-20 11:06:36.800244 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-20 11:06:36.800248 | orchestrator | Saturday 20 September 2025 11:03:53 +0000 (0:00:03.419) 0:01:24.477 **** 2025-09-20 11:06:36.800252 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800255 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800259 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800263 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800267 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800270 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800274 | orchestrator | 2025-09-20 11:06:36.800281 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-20 11:06:36.800285 | orchestrator | Saturday 20 September 2025 11:03:56 +0000 (0:00:02.815) 0:01:27.293 **** 2025-09-20 11:06:36.800289 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800293 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800297 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800301 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800304 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800308 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800312 | orchestrator | 2025-09-20 11:06:36.800316 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-20 11:06:36.800320 | orchestrator | Saturday 20 September 2025 11:03:58 +0000 (0:00:01.855) 0:01:29.148 **** 2025-09-20 11:06:36.800323 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800327 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800331 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800335 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800338 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800342 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800346 | orchestrator | 2025-09-20 11:06:36.800350 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-20 11:06:36.800355 | orchestrator | Saturday 20 September 2025 11:04:00 +0000 (0:00:02.377) 0:01:31.526 **** 2025-09-20 11:06:36.800361 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 11:06:36.800367 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800373 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 11:06:36.800380 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800390 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 11:06:36.800396 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 11:06:36.800402 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800408 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800419 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 11:06:36.800425 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800431 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 11:06:36.800437 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800444 | orchestrator | 2025-09-20 11:06:36.800451 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-20 11:06:36.800458 | orchestrator | Saturday 20 September 2025 11:04:03 +0000 (0:00:02.473) 0:01:33.999 **** 2025-09-20 11:06:36.800465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.800471 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.800484 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.800499 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800517 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800535 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800547 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800552 | orchestrator | 2025-09-20 11:06:36.800559 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-20 11:06:36.800565 | orchestrator | Saturday 20 September 2025 11:04:05 +0000 (0:00:02.510) 0:01:36.510 **** 2025-09-20 11:06:36.800571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.800578 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800598 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.800676 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800689 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.800701 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.800716 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800727 | orchestrator | 2025-09-20 11:06:36.800734 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-20 11:06:36.800740 | orchestrator | Saturday 20 September 2025 11:04:08 +0000 (0:00:02.917) 0:01:39.427 **** 2025-09-20 11:06:36.800746 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800752 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800758 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800763 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800770 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800776 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800781 | orchestrator | 2025-09-20 11:06:36.800788 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-20 11:06:36.800794 | orchestrator | Saturday 20 September 2025 11:04:11 +0000 (0:00:02.319) 0:01:41.747 **** 2025-09-20 11:06:36.800800 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800806 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800812 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800818 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:06:36.800823 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:06:36.800830 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:06:36.800835 | orchestrator | 2025-09-20 11:06:36.800841 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-20 11:06:36.800848 | orchestrator | Saturday 20 September 2025 11:04:16 +0000 (0:00:05.203) 0:01:46.951 **** 2025-09-20 11:06:36.800854 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800860 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800866 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800872 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800878 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800884 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800890 | orchestrator | 2025-09-20 11:06:36.800896 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-20 11:06:36.800902 | orchestrator | Saturday 20 September 2025 11:04:19 +0000 (0:00:03.115) 0:01:50.066 **** 
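[editor's note] Reading the loop output above: each neutron task iterates over the same service map, and a host only acts on the entries mapped to it. That is why testbed-node-0/1/2 report "changed" for the neutron-server items (config.json, neutron.conf, ml2_conf.ini, ssh key) while testbed-node-3/4/5 handle the neutron-ovn-metadata-agent items (and later neutron_ovn_metadata_agent.ini), and why the backend TLS certificate/key/PEM tasks skip on every node, presumably because backend TLS is not enabled in this testbed. Below is a minimal Python sketch of that gate, assuming the 'enabled' and 'host_in_groups' fields visible in the item dumps are what drive the changed/skipping decision; kolla-ansible expresses the equivalent check with Ansible "when:" conditions, so treat the helper as illustrative only, not the project's actual code.

    # Hedged, illustrative sketch: field names come from the item dumps in the
    # log above; the helper itself is hypothetical.

    def service_enabled_and_mapped_to_host(item: dict) -> bool:
        # Act on a service entry only if the service is enabled and this host
        # belongs to its group ('host_in_groups' is already rendered per host).
        value = item["value"]
        return bool(value.get("enabled")) and bool(value.get("host_in_groups"))

    # One entry shaped like the dumps above (heavily trimmed).
    item = {
        "key": "neutron-ovn-metadata-agent",
        "value": {
            "container_name": "neutron_ovn_metadata_agent",
            "image": "registry.osism.tech/kolla/neutron-metadata-agent:2024.2",
            "enabled": True,
            "host_in_groups": True,  # rendered per host; True where the agent runs
            "healthcheck": {
                "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
                "interval": "30",
                "timeout": "30",
                "retries": "3",
            },
        },
    }

    print("changed" if service_enabled_and_mapped_to_host(item) else "skipping")

Individual tasks layer extra conditions on top of this gate, which is apparently why the policy-file and neutron_vpnaas.conf copies skip everywhere: no custom policies or VPNaaS appear to be configured in this run.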
2025-09-20 11:06:36.800908 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800914 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800920 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800930 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800936 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800943 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.800948 | orchestrator | 2025-09-20 11:06:36.800955 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-20 11:06:36.800961 | orchestrator | Saturday 20 September 2025 11:04:22 +0000 (0:00:02.829) 0:01:52.895 **** 2025-09-20 11:06:36.800967 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.800973 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.800979 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.800985 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.800991 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.800997 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801003 | orchestrator | 2025-09-20 11:06:36.801009 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-20 11:06:36.801015 | orchestrator | Saturday 20 September 2025 11:04:25 +0000 (0:00:03.559) 0:01:56.455 **** 2025-09-20 11:06:36.801022 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801028 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801034 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801040 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801046 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801051 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801058 | orchestrator | 2025-09-20 11:06:36.801088 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-20 11:06:36.801095 | orchestrator | Saturday 20 September 2025 11:04:28 +0000 (0:00:02.248) 0:01:58.704 **** 2025-09-20 11:06:36.801107 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801114 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801120 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801126 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801132 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801138 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801144 | orchestrator | 2025-09-20 11:06:36.801150 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-20 11:06:36.801156 | orchestrator | Saturday 20 September 2025 11:04:32 +0000 (0:00:04.351) 0:02:03.055 **** 2025-09-20 11:06:36.801162 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801168 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801174 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801180 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801186 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801192 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801198 | orchestrator | 2025-09-20 11:06:36.801204 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-20 11:06:36.801210 | orchestrator | Saturday 20 September 2025 11:04:36 +0000 (0:00:03.696) 0:02:06.752 
**** 2025-09-20 11:06:36.801216 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801222 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801228 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801234 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801240 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801246 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801252 | orchestrator | 2025-09-20 11:06:36.801258 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-20 11:06:36.801264 | orchestrator | Saturday 20 September 2025 11:04:38 +0000 (0:00:02.673) 0:02:09.425 **** 2025-09-20 11:06:36.801270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 11:06:36.801277 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801283 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 11:06:36.801289 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801298 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 11:06:36.801305 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801311 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 11:06:36.801318 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801324 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 11:06:36.801330 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801336 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 11:06:36.801342 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801349 | orchestrator | 2025-09-20 11:06:36.801355 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-20 11:06:36.801361 | orchestrator | Saturday 20 September 2025 11:04:42 +0000 (0:00:03.931) 0:02:13.357 **** 2025-09-20 11:06:36.801373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.801384 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.801397 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.801410 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 11:06:36.801426 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.801439 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 11:06:36.801459 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801466 | orchestrator | 2025-09-20 11:06:36.801471 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-20 11:06:36.801477 | orchestrator | Saturday 20 September 2025 11:04:45 +0000 (0:00:02.855) 0:02:16.212 **** 2025-09-20 11:06:36.801483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.801490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.801502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.801509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.801524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 11:06:36.801531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 11:06:36.801537 | orchestrator | 2025-09-20 11:06:36.801543 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 11:06:36.801550 | orchestrator | Saturday 20 September 2025 11:04:49 +0000 (0:00:04.217) 0:02:20.429 **** 2025-09-20 11:06:36.801556 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.801563 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.801569 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.801575 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:06:36.801582 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:06:36.801588 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:06:36.801595 | orchestrator | 2025-09-20 11:06:36.801601 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-20 11:06:36.801609 | orchestrator | Saturday 20 September 2025 11:04:50 +0000 (0:00:00.593) 0:02:21.023 
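The item=... dictionaries that "Check neutron containers" loops over are hard to scan inline; re-rendered as YAML, the neutron-server entry for testbed-node-0 (values copied from the output above, the two empty volume entries omitted) looks like this:

# neutron-server service entry as printed in the log above, re-rendered as YAML
neutron-server:
  container_name: neutron_server
  image: registry.osism.tech/kolla/neutron-server:2024.2
  enabled: true
  group: neutron-server
  host_in_groups: true
  volumes:
    - /etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"]
    timeout: "30"
  haproxy:
    neutron_server:
      enabled: true
      mode: http
      external: false
      port: "9696"
      listen_port: "9696"
    neutron_server_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9696"
      listen_port: "9696"

The healthcheck entry becomes the container health check (healthcheck_curl against the API bind address), while the haproxy entry describes the internal and external frontends that load-balance the service.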
**** 2025-09-20 11:06:36.801615 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.801622 | orchestrator | 2025-09-20 11:06:36.801629 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-20 11:06:36.801633 | orchestrator | Saturday 20 September 2025 11:04:52 +0000 (0:00:02.058) 0:02:23.081 **** 2025-09-20 11:06:36.801637 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.801640 | orchestrator | 2025-09-20 11:06:36.801644 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-20 11:06:36.801648 | orchestrator | Saturday 20 September 2025 11:04:54 +0000 (0:00:02.152) 0:02:25.234 **** 2025-09-20 11:06:36.801652 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.801655 | orchestrator | 2025-09-20 11:06:36.801659 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 11:06:36.801663 | orchestrator | Saturday 20 September 2025 11:05:36 +0000 (0:00:41.479) 0:03:06.715 **** 2025-09-20 11:06:36.801667 | orchestrator | 2025-09-20 11:06:36.801671 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 11:06:36.801674 | orchestrator | Saturday 20 September 2025 11:05:36 +0000 (0:00:00.151) 0:03:06.867 **** 2025-09-20 11:06:36.801682 | orchestrator | 2025-09-20 11:06:36.801688 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 11:06:36.801692 | orchestrator | Saturday 20 September 2025 11:05:36 +0000 (0:00:00.453) 0:03:07.320 **** 2025-09-20 11:06:36.801696 | orchestrator | 2025-09-20 11:06:36.801700 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 11:06:36.801703 | orchestrator | Saturday 20 September 2025 11:05:36 +0000 (0:00:00.133) 0:03:07.453 **** 2025-09-20 11:06:36.801707 | orchestrator | 2025-09-20 11:06:36.801711 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 11:06:36.801715 | orchestrator | Saturday 20 September 2025 11:05:36 +0000 (0:00:00.071) 0:03:07.525 **** 2025-09-20 11:06:36.801719 | orchestrator | 2025-09-20 11:06:36.801722 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 11:06:36.801726 | orchestrator | Saturday 20 September 2025 11:05:37 +0000 (0:00:00.159) 0:03:07.684 **** 2025-09-20 11:06:36.801730 | orchestrator | 2025-09-20 11:06:36.801734 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-20 11:06:36.801737 | orchestrator | Saturday 20 September 2025 11:05:37 +0000 (0:00:00.132) 0:03:07.817 **** 2025-09-20 11:06:36.801741 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.801745 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:36.801749 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:36.801753 | orchestrator | 2025-09-20 11:06:36.801757 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-20 11:06:36.801760 | orchestrator | Saturday 20 September 2025 11:06:05 +0000 (0:00:28.774) 0:03:36.592 **** 2025-09-20 11:06:36.801764 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:06:36.801768 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:06:36.801772 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:06:36.801776 | orchestrator | 2025-09-20 
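The empty "Flush Handlers" tasks followed by the RUNNING HANDLER lines are standard Ansible mechanics: the earlier template tasks notify restart handlers, and meta: flush_handlers forces those handlers to run at that point in the play rather than at its end, so the containers are restarted before later tasks depend on them. A minimal, self-contained illustration of the mechanism (the real handler recreates the kolla container; here it is stubbed out with debug):

# Minimal demonstration of the notify / flush_handlers pattern behind the
# "RUNNING HANDLER [neutron : Restart neutron-server container]" lines.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Copying over neutron.conf (stand-in for the real template task)
      ansible.builtin.copy:
        content: "[DEFAULT]\n"
        dest: /tmp/neutron.conf
      notify: Restart neutron-server container

    - name: Flush Handlers
      ansible.builtin.meta: flush_handlers     # run any pending handlers now

    - name: Tasks after this point see the restarted service
      ansible.builtin.debug:
        msg: "handlers already ran"
  handlers:
    - name: Restart neutron-server container
      ansible.builtin.debug:                   # the real handler restarts the container
        msg: "restarting neutron_server"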
11:06:36.801779 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 11:06:36.801783 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-20 11:06:36.801789 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-20 11:06:36.801796 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-20 11:06:36.801800 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-20 11:06:36.801804 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-20 11:06:36.801807 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-20 11:06:36.801811 | orchestrator |
2025-09-20 11:06:36.801815 | orchestrator |
2025-09-20 11:06:36.801819 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:06:36.801823 | orchestrator | Saturday 20 September 2025 11:06:33 +0000 (0:00:27.860) 0:04:04.453 ****
2025-09-20 11:06:36.801827 | orchestrator | ===============================================================================
2025-09-20 11:06:36.801831 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.48s
2025-09-20 11:06:36.801834 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.77s
2025-09-20 11:06:36.801838 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 27.86s
2025-09-20 11:06:36.801842 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.16s
2025-09-20 11:06:36.801846 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.12s
2025-09-20 11:06:36.801853 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.20s
2025-09-20 11:06:36.801856 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.80s
2025-09-20 11:06:36.801860 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.35s
2025-09-20 11:06:36.801864 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.22s
2025-09-20 11:06:36.801868 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.98s
2025-09-20 11:06:36.801871 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.93s
2025-09-20 11:06:36.801875 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.70s
2025-09-20 11:06:36.801879 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.56s
2025-09-20 11:06:36.801883 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.55s
2025-09-20 11:06:36.801886 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.54s
2025-09-20 11:06:36.801890 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.52s
2025-09-20 11:06:36.801894 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.42s
2025-09-20 11:06:36.801898 | orchestrator | neutron : Copying over 
existing policy file ----------------------------- 3.29s 2025-09-20 11:06:36.801901 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.17s 2025-09-20 11:06:36.801905 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.12s 2025-09-20 11:06:36.801911 | orchestrator | 2025-09-20 11:06:36 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:36.801916 | orchestrator | 2025-09-20 11:06:36 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state STARTED 2025-09-20 11:06:36.803590 | orchestrator | 2025-09-20 11:06:36.803616 | orchestrator | 2025-09-20 11:06:36.803622 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:06:36.803628 | orchestrator | 2025-09-20 11:06:36.803633 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:06:36.803639 | orchestrator | Saturday 20 September 2025 11:05:31 +0000 (0:00:00.251) 0:00:00.251 **** 2025-09-20 11:06:36.803644 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:06:36.803650 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:36.803656 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:36.803661 | orchestrator | 2025-09-20 11:06:36.803667 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:06:36.803674 | orchestrator | Saturday 20 September 2025 11:05:32 +0000 (0:00:00.284) 0:00:00.536 **** 2025-09-20 11:06:36.803680 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-20 11:06:36.803686 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-20 11:06:36.803692 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-20 11:06:36.803698 | orchestrator | 2025-09-20 11:06:36.803703 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-20 11:06:36.803709 | orchestrator | 2025-09-20 11:06:36.803715 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-20 11:06:36.803720 | orchestrator | Saturday 20 September 2025 11:05:32 +0000 (0:00:00.347) 0:00:00.883 **** 2025-09-20 11:06:36.803726 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:06:36.803732 | orchestrator | 2025-09-20 11:06:36.803738 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-20 11:06:36.803743 | orchestrator | Saturday 20 September 2025 11:05:32 +0000 (0:00:00.489) 0:00:01.373 **** 2025-09-20 11:06:36.803749 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-20 11:06:36.803755 | orchestrator | 2025-09-20 11:06:36.803761 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-20 11:06:36.803779 | orchestrator | Saturday 20 September 2025 11:05:36 +0000 (0:00:03.206) 0:00:04.579 **** 2025-09-20 11:06:36.803786 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-20 11:06:36.803792 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-20 11:06:36.803798 | orchestrator | 2025-09-20 11:06:36.803804 | orchestrator | TASK [service-ks-register : placement | Creating projects] 
********************* 2025-09-20 11:06:36.803810 | orchestrator | Saturday 20 September 2025 11:05:42 +0000 (0:00:06.296) 0:00:10.875 **** 2025-09-20 11:06:36.803816 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:06:36.803821 | orchestrator | 2025-09-20 11:06:36.803828 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-20 11:06:36.803834 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:03.041) 0:00:13.917 **** 2025-09-20 11:06:36.803839 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:06:36.803845 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-20 11:06:36.803851 | orchestrator | 2025-09-20 11:06:36.803856 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-20 11:06:36.803862 | orchestrator | Saturday 20 September 2025 11:05:49 +0000 (0:00:03.705) 0:00:17.622 **** 2025-09-20 11:06:36.803868 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:06:36.803874 | orchestrator | 2025-09-20 11:06:36.803879 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-20 11:06:36.803885 | orchestrator | Saturday 20 September 2025 11:05:52 +0000 (0:00:03.262) 0:00:20.885 **** 2025-09-20 11:06:36.803891 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-20 11:06:36.803897 | orchestrator | 2025-09-20 11:06:36.803903 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-20 11:06:36.803908 | orchestrator | Saturday 20 September 2025 11:05:56 +0000 (0:00:03.877) 0:00:24.763 **** 2025-09-20 11:06:36.803914 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.803920 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.803926 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.803932 | orchestrator | 2025-09-20 11:06:36.803938 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-20 11:06:36.803944 | orchestrator | Saturday 20 September 2025 11:05:56 +0000 (0:00:00.291) 0:00:25.054 **** 2025-09-20 11:06:36.803958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.803974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.803985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.803991 | orchestrator | 2025-09-20 11:06:36.803997 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-20 11:06:36.804003 | orchestrator | Saturday 20 September 2025 11:05:57 +0000 (0:00:00.811) 0:00:25.866 **** 2025-09-20 11:06:36.804009 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.804015 | orchestrator | 2025-09-20 11:06:36.804020 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-20 11:06:36.804026 | orchestrator | Saturday 20 September 2025 11:05:57 +0000 (0:00:00.140) 0:00:26.006 **** 2025-09-20 11:06:36.804032 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.804038 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.804044 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.804050 | orchestrator | 2025-09-20 11:06:36.804055 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-20 11:06:36.804061 | orchestrator | Saturday 20 September 2025 11:05:57 +0000 (0:00:00.453) 0:00:26.460 **** 2025-09-20 11:06:36.804095 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:06:36.804101 | orchestrator | 2025-09-20 11:06:36.804107 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-20 11:06:36.804113 | orchestrator | Saturday 20 September 2025 11:05:58 +0000 (0:00:00.540) 0:00:27.001 **** 2025-09-20 11:06:36.804119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804151 | orchestrator | 2025-09-20 11:06:36.804157 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-20 11:06:36.804163 | orchestrator | Saturday 20 September 2025 11:06:00 +0000 (0:00:01.492) 0:00:28.493 **** 2025-09-20 11:06:36.804169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}})  2025-09-20 11:06:36.804175 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.804181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804187 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.804200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804213 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.804220 | orchestrator | 2025-09-20 11:06:36.804225 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-20 11:06:36.804229 | orchestrator | Saturday 20 September 2025 11:06:01 +0000 (0:00:01.903) 0:00:30.397 **** 2025-09-20 11:06:36.804234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804238 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.804243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804248 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.804252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804257 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.804261 | orchestrator | 2025-09-20 11:06:36.804266 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-20 11:06:36.804270 | orchestrator | Saturday 20 September 2025 11:06:03 +0000 (0:00:01.186) 0:00:31.583 **** 2025-09-20 11:06:36.804281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804299 | orchestrator | 2025-09-20 11:06:36.804304 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-20 11:06:36.804308 | orchestrator | Saturday 20 September 2025 11:06:04 +0000 (0:00:01.340) 0:00:32.924 **** 2025-09-20 11:06:36.804313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 
11:06:36.804336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804341 | orchestrator | 2025-09-20 11:06:36.804345 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-20 11:06:36.804349 | orchestrator | Saturday 20 September 2025 11:06:06 +0000 (0:00:01.940) 0:00:34.865 **** 2025-09-20 11:06:36.804354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-20 11:06:36.804358 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-20 11:06:36.804362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-20 11:06:36.804367 | orchestrator | 2025-09-20 11:06:36.804371 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-20 11:06:36.804375 | orchestrator | Saturday 20 September 2025 11:06:08 +0000 (0:00:01.669) 0:00:36.534 **** 2025-09-20 11:06:36.804380 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:36.804384 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:36.804388 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.804392 | orchestrator | 2025-09-20 11:06:36.804397 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-20 11:06:36.804401 | orchestrator | Saturday 20 September 2025 11:06:09 +0000 (0:00:01.274) 0:00:37.809 **** 2025-09-20 11:06:36.804405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804410 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:36.804414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804422 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:36.804432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 11:06:36.804437 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:36.804442 | orchestrator | 2025-09-20 11:06:36.804446 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-20 11:06:36.804451 | orchestrator | Saturday 20 September 2025 11:06:09 +0000 (0:00:00.599) 0:00:38.409 **** 2025-09-20 11:06:36.804455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 11:06:36.804472 | orchestrator | 2025-09-20 11:06:36.804476 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-20 11:06:36.804480 | orchestrator | Saturday 20 September 2025 11:06:11 +0000 (0:00:01.364) 0:00:39.774 **** 2025-09-20 11:06:36.804485 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.804489 | orchestrator | 2025-09-20 11:06:36.804493 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-20 11:06:36.804498 | orchestrator | Saturday 20 September 2025 11:06:13 +0000 (0:00:02.323) 0:00:42.097 **** 2025-09-20 11:06:36.804502 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.804506 | orchestrator | 2025-09-20 11:06:36.804511 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-20 11:06:36.804515 | orchestrator | Saturday 20 September 2025 11:06:15 +0000 (0:00:02.123) 0:00:44.220 **** 2025-09-20 11:06:36.804519 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:36.804524 | orchestrator | 2025-09-20 11:06:36.804528 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-20 11:06:36.804531 | orchestrator | Saturday 20 September 2025 11:06:28 +0000 (0:00:12.657) 0:00:56.878 **** 2025-09-20 11:06:36.804535 | orchestrator | 2025-09-20 11:06:36.804541 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-20 11:06:36.804545 | orchestrator | Saturday 20 September 2025 11:06:28 +0000 (0:00:00.070) 0:00:56.948 **** 2025-09-20 11:06:36.804549 | orchestrator | 2025-09-20 11:06:36.804555 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-20 11:06:36.804559 | orchestrator | Saturday 20 September 2025 11:06:28 +0000 (0:00:00.075) 0:00:57.024 **** 2025-09-20 11:06:36.804563 | orchestrator | 2025-09-20 11:06:36.804566 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-20 11:06:36.804570 | orchestrator 
| Saturday 20 September 2025 11:06:28 +0000 (0:00:00.071) 0:00:57.095 ****
2025-09-20 11:06:36.804574 | orchestrator | changed: [testbed-node-0]
2025-09-20 11:06:36.804578 | orchestrator | changed: [testbed-node-1]
2025-09-20 11:06:36.804582 | orchestrator | changed: [testbed-node-2]
2025-09-20 11:06:36.804585 | orchestrator |
2025-09-20 11:06:36.804589 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 11:06:36.804593 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 11:06:36.804597 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 11:06:36.804601 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 11:06:36.804605 | orchestrator |
2025-09-20 11:06:36.804609 | orchestrator |
2025-09-20 11:06:36.804613 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:06:36.804616 | orchestrator | Saturday 20 September 2025 11:06:34 +0000 (0:00:05.452) 0:01:02.548 ****
2025-09-20 11:06:36.804620 | orchestrator | ===============================================================================
2025-09-20 11:06:36.804627 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.66s
2025-09-20 11:06:36.804631 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.30s
2025-09-20 11:06:36.804635 | orchestrator | placement : Restart placement-api container ----------------------------- 5.45s
2025-09-20 11:06:36.804638 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.88s
2025-09-20 11:06:36.804642 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.71s
2025-09-20 11:06:36.804646 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.26s
2025-09-20 11:06:36.804649 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.21s
2025-09-20 11:06:36.804653 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.04s
2025-09-20 11:06:36.804657 | orchestrator | placement : Creating placement databases -------------------------------- 2.32s
2025-09-20 11:06:36.804661 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.12s
2025-09-20 11:06:36.804664 | orchestrator | placement : Copying over placement.conf --------------------------------- 1.94s
2025-09-20 11:06:36.804668 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.90s
2025-09-20 11:06:36.804672 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.67s
2025-09-20 11:06:36.804676 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.49s
2025-09-20 11:06:36.804679 | orchestrator | placement : Check placement containers ---------------------------------- 1.36s
2025-09-20 11:06:36.804683 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s
2025-09-20 11:06:36.804687 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.27s
2025-09-20 11:06:36.804691 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.19s 
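The service-ks-register timings in this recap cover the usual Keystone bootstrapping for a service: create the catalog entry, the internal and public endpoints, the service user, and the admin role grant on the service project. A rough sketch of that sequence with the openstack.cloud collection, reusing the placement URLs shown earlier in this play; module parameter names follow my reading of the collection and should be double-checked, and the region and password values are placeholders:

# Rough sketch only: the real tasks live in kolla-ansible's service-ks-register
# role. Parameter names are assumptions against openstack.cloud and should be
# verified; region and password are placeholders.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: placement | Creating services
      openstack.cloud.catalog_service:
        name: placement
        service_type: placement
        description: Placement Service

    - name: placement | Creating endpoints
      openstack.cloud.endpoint:
        service: placement
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        region: RegionOne                              # placeholder region
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:8780" }
        - { interface: public, url: "https://api.testbed.osism.xyz:8780" }

    - name: placement | Creating users
      openstack.cloud.identity_user:
        name: placement
        password: "{{ placement_keystone_password }}"  # placeholder secret
        default_project: service

    - name: placement | Granting user roles
      openstack.cloud.role_assignment:
        user: placement
        role: admin
        project: service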
2025-09-20 11:06:36.804694 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.81s 2025-09-20 11:06:36.804698 | orchestrator | placement : Copying over existing policy file --------------------------- 0.60s 2025-09-20 11:06:36.804702 | orchestrator | 2025-09-20 11:06:36 | INFO  | Task 45b38e57-e585-45d6-b874-9995e9e24b3f is in state SUCCESS 2025-09-20 11:06:36.804853 | orchestrator | 2025-09-20 11:06:36 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:36.806461 | orchestrator | 2025-09-20 11:06:36 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:36.806574 | orchestrator | 2025-09-20 11:06:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:39.853441 | orchestrator | 2025-09-20 11:06:39 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:39.855602 | orchestrator | 2025-09-20 11:06:39 | INFO  | Task 47038ae6-8234-49c8-92f9-a7a777a533c3 is in state SUCCESS 2025-09-20 11:06:39.857336 | orchestrator | 2025-09-20 11:06:39.857394 | orchestrator | 2025-09-20 11:06:39.857412 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:06:39.857430 | orchestrator | 2025-09-20 11:06:39.857446 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:06:39.857463 | orchestrator | Saturday 20 September 2025 11:03:42 +0000 (0:00:00.388) 0:00:00.388 **** 2025-09-20 11:06:39.857481 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:06:39.857498 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:06:39.857514 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:06:39.857531 | orchestrator | 2025-09-20 11:06:39.857541 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:06:39.857569 | orchestrator | Saturday 20 September 2025 11:03:42 +0000 (0:00:00.693) 0:00:01.082 **** 2025-09-20 11:06:39.857580 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-20 11:06:39.857590 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-20 11:06:39.857622 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-20 11:06:39.857632 | orchestrator | 2025-09-20 11:06:39.857642 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-20 11:06:39.857651 | orchestrator | 2025-09-20 11:06:39.857661 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 11:06:39.857671 | orchestrator | Saturday 20 September 2025 11:03:43 +0000 (0:00:00.532) 0:00:01.615 **** 2025-09-20 11:06:39.857680 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:06:39.857691 | orchestrator | 2025-09-20 11:06:39.857700 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-20 11:06:39.857710 | orchestrator | Saturday 20 September 2025 11:03:44 +0000 (0:00:01.107) 0:00:02.723 **** 2025-09-20 11:06:39.857720 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-20 11:06:39.857729 | orchestrator | 2025-09-20 11:06:39.857739 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-20 11:06:39.857749 | orchestrator | Saturday 20 September 2025 11:03:47 +0000 
(0:00:03.217) 0:00:05.940 **** 2025-09-20 11:06:39.857759 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-20 11:06:39.857769 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-20 11:06:39.857778 | orchestrator | 2025-09-20 11:06:39.857788 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-20 11:06:39.857798 | orchestrator | Saturday 20 September 2025 11:03:53 +0000 (0:00:05.888) 0:00:11.829 **** 2025-09-20 11:06:39.857808 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:06:39.857817 | orchestrator | 2025-09-20 11:06:39.857827 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-20 11:06:39.857837 | orchestrator | Saturday 20 September 2025 11:03:56 +0000 (0:00:02.980) 0:00:14.810 **** 2025-09-20 11:06:39.857848 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:06:39.857858 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-20 11:06:39.857867 | orchestrator | 2025-09-20 11:06:39.857877 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-20 11:06:39.857887 | orchestrator | Saturday 20 September 2025 11:04:00 +0000 (0:00:03.601) 0:00:18.411 **** 2025-09-20 11:06:39.857896 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:06:39.857906 | orchestrator | 2025-09-20 11:06:39.858198 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-20 11:06:39.858218 | orchestrator | Saturday 20 September 2025 11:04:02 +0000 (0:00:02.699) 0:00:21.111 **** 2025-09-20 11:06:39.858229 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-20 11:06:39.858239 | orchestrator | 2025-09-20 11:06:39.858249 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-20 11:06:39.858259 | orchestrator | Saturday 20 September 2025 11:04:07 +0000 (0:00:04.273) 0:00:25.384 **** 2025-09-20 11:06:39.858273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.858318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.858337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.858349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858548 | orchestrator | 2025-09-20 11:06:39.858558 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-20 11:06:39.858568 | orchestrator | Saturday 20 September 2025 11:04:10 +0000 (0:00:03.391) 0:00:28.775 **** 2025-09-20 11:06:39.858578 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.858588 | orchestrator | 2025-09-20 11:06:39.858598 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-20 11:06:39.858608 | orchestrator | Saturday 20 September 2025 11:04:10 +0000 (0:00:00.106) 0:00:28.882 **** 2025-09-20 11:06:39.858617 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.858627 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:39.858637 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:39.858647 | orchestrator | 2025-09-20 11:06:39.858674 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 11:06:39.858684 | orchestrator | Saturday 20 September 2025 11:04:10 +0000 (0:00:00.256) 0:00:29.139 **** 2025-09-20 11:06:39.858694 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:06:39.858711 | orchestrator | 2025-09-20 11:06:39.858721 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-20 11:06:39.858731 | orchestrator | Saturday 20 September 2025 11:04:11 +0000 (0:00:00.948) 0:00:30.087 **** 2025-09-20 11:06:39.858741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.858764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.858776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.858786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.858953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.859547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.859580 | orchestrator | 2025-09-20 11:06:39.859590 | orchestrator | TASK [service-cert-copy : designate | Copying 
over backend internal TLS certificate] *** 2025-09-20 11:06:39.859600 | orchestrator | Saturday 20 September 2025 11:04:19 +0000 (0:00:07.240) 0:00:37.328 **** 2025-09-20 11:06:39.859611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.859622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.859676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859728 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:39.859739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.859749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.859786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859886 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.859897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.859908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.859950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.859987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860007 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:39.860017 | orchestrator | 2025-09-20 11:06:39.860027 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-20 11:06:39.860037 | orchestrator | Saturday 20 September 2025 11:04:20 +0000 (0:00:01.208) 0:00:38.536 **** 2025-09-20 11:06:39.860047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.860058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.860170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860235 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.860248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.860259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.860300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860360 | orchestrator | skipping: 
[testbed-node-1] 2025-09-20 11:06:39.860372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.860385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.860396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860475 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.860487 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:39.860498 | orchestrator | 2025-09-20 11:06:39.860509 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-20 11:06:39.860519 | orchestrator | Saturday 20 September 2025 11:04:22 +0000 (0:00:02.492) 0:00:41.029 **** 2025-09-20 11:06:39.860529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.860540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.860580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.860592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860798 
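For readability: every (item=...) record in this "Copying over config.json files for services" task is one entry of the designate role's service map, printed once per target node. Re-rendered as YAML, the designate-api entry from the records above looks like the sketch below; the values are taken from the log itself, only the layout changes, and the enclosing variable name (designate_services) follows the usual kolla-ansible convention rather than anything confirmed by this output. The two empty strings in the log's volumes lists (unset optional mounts) are omitted here.

    designate_services:
      designate-api:
        container_name: designate_api
        group: designate-api
        enabled: true
        image: registry.osism.tech/kolla/designate-api:2024.2
        volumes:
          - "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        healthcheck:
          interval: 30        # seconds
          retries: 3
          start_period: 5
          timeout: 30
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
        haproxy:
          designate_api:
            enabled: "yes"
            mode: http
            external: false
            port: 9001
            listen_port: 9001
          designate_api_external:
            enabled: "yes"
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: 9001
            listen_port: 9001

The healthcheck block is what ends up as the container's Docker health check, and the haproxy block is what the loadbalancer role uses to expose the API internally and externally on port 9001; the only per-node difference between the three changed records is the IP in the healthcheck URL (192.168.16.10/.11/.12).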
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860880 | orchestrator | 2025-09-20 11:06:39.860890 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-20 11:06:39.860900 | orchestrator | Saturday 20 September 2025 11:04:31 +0000 (0:00:08.512) 0:00:49.542 **** 2025-09-20 11:06:39.860911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.860921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.860932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.860952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.860993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2025-09-20 11:06:39.861084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861174 | orchestrator | 2025-09-20 11:06:39.861185 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-20 11:06:39.861195 | orchestrator | Saturday 20 September 2025 11:04:52 +0000 (0:00:21.311) 0:01:10.853 **** 2025-09-20 11:06:39.861205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-20 11:06:39.861215 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-20 11:06:39.861224 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-20 11:06:39.861234 | orchestrator | 2025-09-20 11:06:39.861244 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-20 11:06:39.861254 | orchestrator | Saturday 20 September 2025 11:04:57 +0000 (0:00:04.530) 0:01:15.383 **** 2025-09-20 11:06:39.861263 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-20 11:06:39.861273 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-20 11:06:39.861282 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-20 11:06:39.861292 | orchestrator | 2025-09-20 11:06:39.861302 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-20 11:06:39.861312 | orchestrator | Saturday 20 September 2025 11:04:59 +0000 (0:00:02.781) 0:01:18.164 **** 2025-09-20 11:06:39.861322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.861333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.861356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.861371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 
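The pools.yaml rendered at 11:04:52 above is what connects Designate to the bind9 backend running in the designate_backend_bind9 containers. The template itself is not part of this log, but for this layout a minimal pool definition would look roughly like the following sketch; the NS hostname is hypothetical, mdns port 5354 and rndc port 953 are the usual Designate/BIND defaults, and the 192.168.16.10-12 addresses are assumed from the healthcheck URLs visible in this run:

    - name: default
      description: Default BIND9 pool
      ns_records:
        - hostname: ns1.testbed.osism.xyz.   # hypothetical NS name
          priority: 1
      nameservers:
        - host: 192.168.16.10
          port: 53
        - host: 192.168.16.11
          port: 53
        - host: 192.168.16.12
          port: 53
      targets:
        # one target per bind9 instance; repeated for .11 and .12 in a three-node pool
        - type: bind9
          masters:
            - host: 192.168.16.10
              port: 5354
            - host: 192.168.16.11
              port: 5354
            - host: 192.168.16.12
              port: 5354
          options:
            host: 192.168.16.10
            port: 53
            rndc_host: 192.168.16.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key

designate-mdns (the masters entries on 5354) serves zone transfers to each named instance, while designate-worker drives zone creation and deletion over rndc on 953 — which is why the rndc.conf and rndc.key tasks in this section only install those files for the backend-bind9 and worker services.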
11:06:39.861402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861562 | orchestrator | 
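All of the file-copy tasks in this play (config.json, designate.conf, pools.yaml, named.conf, rndc.conf and rndc.key) follow the same kolla-ansible pattern: loop over the service map, render a Jinja2 template into /etc/kolla/<service>/ on each controller, and let the later container check decide whether anything needs restarting. The config.json files in particular are what the kolla entrypoint reads at container start to copy the rendered configuration from /var/lib/kolla/config_files/ into place before starting the service. A minimal sketch of the pattern, using only ansible.builtin.template and hypothetical template/handler wiring (the role's real task files are not shown in this log):

    - name: Copying over designate.conf
      ansible.builtin.template:
        src: designate.conf.j2                 # hypothetical template name
        dest: /etc/kolla/{{ item.key }}/designate.conf
        mode: "0660"
      become: true
      when: item.value.enabled | bool          # per-file tasks add extra conditions,
                                               # e.g. only bind9/worker for rndc files
      with_dict: "{{ designate_services }}"

This is also why the output is so repetitive: each of the six designate services is evaluated once per node, and every item that fails a task's condition is reported as "skipping" rather than silently dropped.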
changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861579 | orchestrator | 2025-09-20 11:06:39.861604 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-20 11:06:39.861624 | orchestrator | Saturday 20 September 2025 11:05:02 +0000 (0:00:02.709) 0:01:20.874 **** 2025-09-20 11:06:39.861642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.861659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.861689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-09-20 11:06:39.861726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861927 | 
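The healthcheck block carried in every one of these service entries becomes the Docker health check of the corresponding container once it is created; the "Check designate containers" task at 11:05:07 further down is where kolla-ansible compares the desired container definition, including that health check, against what is actually running. Expressed as a roughly equivalent Compose-style stanza (the bare numbers in the log are seconds):

    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9001"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s

healthcheck_curl, healthcheck_port and healthcheck_listen are small helper scripts shipped in the kolla images: the API containers probe their HTTP endpoint, the central/mdns/producer/worker checks look for an established connection from the service process to port 5672 (the messaging bus), and the bind9 check verifies that named is listening on port 53.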
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.861938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.861992 | orchestrator | 2025-09-20 11:06:39.862008 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 11:06:39.862115 | orchestrator | Saturday 20 September 2025 11:05:05 +0000 (0:00:02.728) 0:01:23.603 **** 2025-09-20 11:06:39.862132 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.862147 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:39.862163 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:39.862177 | orchestrator | 2025-09-20 11:06:39.862193 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-20 11:06:39.862208 | orchestrator | Saturday 20 September 2025 11:05:05 +0000 (0:00:00.562) 0:01:24.165 **** 2025-09-20 11:06:39.862225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.862256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.862274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862364 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.862382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.862408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.862420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862472 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:39.862482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 11:06:39.862498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 11:06:39.862508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:06:39.862559 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:39.862569 | orchestrator | 2025-09-20 11:06:39.862579 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-20 11:06:39.862589 | orchestrator | Saturday 20 September 2025 11:05:07 +0000 (0:00:01.557) 0:01:25.722 **** 2025-09-20 11:06:39.862599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.862615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.862626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 11:06:39.862636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:06:39.862881 | orchestrator | 2025-09-20 
11:06:39.862892 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 11:06:39.862902 | orchestrator | Saturday 20 September 2025 11:05:11 +0000 (0:00:04.379) 0:01:30.101 **** 2025-09-20 11:06:39.862912 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:06:39.862922 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:06:39.862932 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:06:39.862949 | orchestrator | 2025-09-20 11:06:39.862964 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-20 11:06:39.862975 | orchestrator | Saturday 20 September 2025 11:05:12 +0000 (0:00:00.601) 0:01:30.702 **** 2025-09-20 11:06:39.862985 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-20 11:06:39.862995 | orchestrator | 2025-09-20 11:06:39.863005 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-20 11:06:39.863015 | orchestrator | Saturday 20 September 2025 11:05:14 +0000 (0:00:02.035) 0:01:32.738 **** 2025-09-20 11:06:39.863024 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 11:06:39.863034 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-20 11:06:39.863044 | orchestrator | 2025-09-20 11:06:39.863054 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-20 11:06:39.863091 | orchestrator | Saturday 20 September 2025 11:05:16 +0000 (0:00:02.175) 0:01:34.913 **** 2025-09-20 11:06:39.863103 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863113 | orchestrator | 2025-09-20 11:06:39.863123 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-20 11:06:39.863133 | orchestrator | Saturday 20 September 2025 11:05:33 +0000 (0:00:17.338) 0:01:52.252 **** 2025-09-20 11:06:39.863143 | orchestrator | 2025-09-20 11:06:39.863153 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-20 11:06:39.863162 | orchestrator | Saturday 20 September 2025 11:05:34 +0000 (0:00:00.182) 0:01:52.434 **** 2025-09-20 11:06:39.863172 | orchestrator | 2025-09-20 11:06:39.863182 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-20 11:06:39.863192 | orchestrator | Saturday 20 September 2025 11:05:34 +0000 (0:00:00.058) 0:01:52.493 **** 2025-09-20 11:06:39.863201 | orchestrator | 2025-09-20 11:06:39.863211 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-20 11:06:39.863221 | orchestrator | Saturday 20 September 2025 11:05:34 +0000 (0:00:00.059) 0:01:52.552 **** 2025-09-20 11:06:39.863230 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863240 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:39.863250 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:39.863260 | orchestrator | 2025-09-20 11:06:39.863269 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-20 11:06:39.863279 | orchestrator | Saturday 20 September 2025 11:05:47 +0000 (0:00:12.740) 0:02:05.292 **** 2025-09-20 11:06:39.863289 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863299 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:39.863308 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:39.863318 | orchestrator | 
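[Annotation, not part of the job log] The running handlers restart the designate containers one service at a time; the healthcheck definitions printed in the loop items earlier (healthcheck_curl, healthcheck_port, healthcheck_listen) are what kolla-ansible wires into each container's Docker healthcheck. A minimal sketch for spot-checking the resulting health status from a deployment node, assuming the Docker SDK for Python is available there (it is not installed by this job):

    import docker  # Docker SDK for Python; assumed available on the node, not provided by this job

    client = docker.from_env()
    for name in ("designate_api", "designate_backend_bind9", "designate_central",
                 "designate_mdns", "designate_producer", "designate_worker"):
        container = client.containers.get(name)
        health = container.attrs.get("State", {}).get("Health", {}).get("Status", "none")
        print(f"{name}: {container.status} (health: {health})")
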
2025-09-20 11:06:39.863328 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-20 11:06:39.863337 | orchestrator | Saturday 20 September 2025 11:05:59 +0000 (0:00:12.636) 0:02:17.929 **** 2025-09-20 11:06:39.863347 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:39.863357 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:39.863367 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863376 | orchestrator | 2025-09-20 11:06:39.863386 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-20 11:06:39.863396 | orchestrator | Saturday 20 September 2025 11:06:10 +0000 (0:00:11.217) 0:02:29.146 **** 2025-09-20 11:06:39.863406 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863416 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:39.863425 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:39.863435 | orchestrator | 2025-09-20 11:06:39.863445 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-20 11:06:39.863454 | orchestrator | Saturday 20 September 2025 11:06:17 +0000 (0:00:06.320) 0:02:35.467 **** 2025-09-20 11:06:39.863464 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:39.863474 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:39.863483 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863493 | orchestrator | 2025-09-20 11:06:39.863509 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-20 11:06:39.863519 | orchestrator | Saturday 20 September 2025 11:06:25 +0000 (0:00:08.665) 0:02:44.133 **** 2025-09-20 11:06:39.863529 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863539 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:06:39.863549 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:06:39.863558 | orchestrator | 2025-09-20 11:06:39.863568 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-20 11:06:39.863578 | orchestrator | Saturday 20 September 2025 11:06:32 +0000 (0:00:06.225) 0:02:50.359 **** 2025-09-20 11:06:39.863587 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:06:39.863597 | orchestrator | 2025-09-20 11:06:39.863607 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:06:39.863617 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-20 11:06:39.863628 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 11:06:39.863638 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 11:06:39.863648 | orchestrator | 2025-09-20 11:06:39.863658 | orchestrator | 2025-09-20 11:06:39.863673 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:06:39.863683 | orchestrator | Saturday 20 September 2025 11:06:38 +0000 (0:00:06.684) 0:02:57.043 **** 2025-09-20 11:06:39.863693 | orchestrator | =============================================================================== 2025-09-20 11:06:39.863703 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.31s 2025-09-20 11:06:39.863713 | orchestrator | designate : Running Designate bootstrap container 
---------------------- 17.34s 2025-09-20 11:06:39.863722 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.74s 2025-09-20 11:06:39.863736 | orchestrator | designate : Restart designate-api container ---------------------------- 12.64s 2025-09-20 11:06:39.863749 | orchestrator | designate : Restart designate-central container ------------------------ 11.22s 2025-09-20 11:06:39.863765 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.67s 2025-09-20 11:06:39.863782 | orchestrator | designate : Copying over config.json files for services ----------------- 8.51s 2025-09-20 11:06:39.863798 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.24s 2025-09-20 11:06:39.863814 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.68s 2025-09-20 11:06:39.863832 | orchestrator | designate : Restart designate-producer container ------------------------ 6.32s 2025-09-20 11:06:39.863849 | orchestrator | designate : Restart designate-worker container -------------------------- 6.23s 2025-09-20 11:06:39.863865 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.89s 2025-09-20 11:06:39.863877 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.53s 2025-09-20 11:06:39.863887 | orchestrator | designate : Check designate containers ---------------------------------- 4.38s 2025-09-20 11:06:39.863896 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.27s 2025-09-20 11:06:39.863906 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.60s 2025-09-20 11:06:39.863916 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.39s 2025-09-20 11:06:39.863925 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.22s 2025-09-20 11:06:39.863935 | orchestrator | service-ks-register : designate | Creating projects --------------------- 2.98s 2025-09-20 11:06:39.863945 | orchestrator | designate : Copying over named.conf ------------------------------------- 2.78s 2025-09-20 11:06:39.863955 | orchestrator | 2025-09-20 11:06:39 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:39.863972 | orchestrator | 2025-09-20 11:06:39 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:39.863982 | orchestrator | 2025-09-20 11:06:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:42.904674 | orchestrator | 2025-09-20 11:06:42 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:42.908031 | orchestrator | 2025-09-20 11:06:42 | INFO  | Task 3e223b2b-41d0-4b7d-bb86-910a901bb919 is in state STARTED 2025-09-20 11:06:42.909952 | orchestrator | 2025-09-20 11:06:42 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:42.912241 | orchestrator | 2025-09-20 11:06:42 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:42.912271 | orchestrator | 2025-09-20 11:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:45.941997 | orchestrator | 2025-09-20 11:06:45 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:45.942633 | orchestrator | 2025-09-20 11:06:45 | INFO  | Task 3e223b2b-41d0-4b7d-bb86-910a901bb919 
is in state STARTED 2025-09-20 11:06:45.943382 | orchestrator | 2025-09-20 11:06:45 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:45.944114 | orchestrator | 2025-09-20 11:06:45 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:45.944143 | orchestrator | 2025-09-20 11:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:48.968495 | orchestrator | 2025-09-20 11:06:48 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:06:48.968586 | orchestrator | 2025-09-20 11:06:48 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:48.969343 | orchestrator | 2025-09-20 11:06:48 | INFO  | Task 3e223b2b-41d0-4b7d-bb86-910a901bb919 is in state SUCCESS 2025-09-20 11:06:48.969847 | orchestrator | 2025-09-20 11:06:48 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:48.970493 | orchestrator | 2025-09-20 11:06:48 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:48.970528 | orchestrator | 2025-09-20 11:06:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:52.014416 | orchestrator | 2025-09-20 11:06:52 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:06:52.018257 | orchestrator | 2025-09-20 11:06:52 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:52.022988 | orchestrator | 2025-09-20 11:06:52 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:52.031330 | orchestrator | 2025-09-20 11:06:52 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:52.031955 | orchestrator | 2025-09-20 11:06:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:55.072555 | orchestrator | 2025-09-20 11:06:55 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:06:55.072681 | orchestrator | 2025-09-20 11:06:55 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:55.075805 | orchestrator | 2025-09-20 11:06:55 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:55.080043 | orchestrator | 2025-09-20 11:06:55 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:55.080163 | orchestrator | 2025-09-20 11:06:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:06:58.119939 | orchestrator | 2025-09-20 11:06:58 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:06:58.121562 | orchestrator | 2025-09-20 11:06:58 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:06:58.124578 | orchestrator | 2025-09-20 11:06:58 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:06:58.127456 | orchestrator | 2025-09-20 11:06:58 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:06:58.127558 | orchestrator | 2025-09-20 11:06:58 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:01.167552 | orchestrator | 2025-09-20 11:07:01 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:01.168689 | orchestrator | 2025-09-20 11:07:01 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:01.170677 | orchestrator | 2025-09-20 11:07:01 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is 
in state STARTED 2025-09-20 11:07:01.172646 | orchestrator | 2025-09-20 11:07:01 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:01.172695 | orchestrator | 2025-09-20 11:07:01 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:04.215181 | orchestrator | 2025-09-20 11:07:04 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:04.216329 | orchestrator | 2025-09-20 11:07:04 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:04.217531 | orchestrator | 2025-09-20 11:07:04 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:04.219010 | orchestrator | 2025-09-20 11:07:04 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:04.219036 | orchestrator | 2025-09-20 11:07:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:07.256051 | orchestrator | 2025-09-20 11:07:07 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:07.256762 | orchestrator | 2025-09-20 11:07:07 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:07.258554 | orchestrator | 2025-09-20 11:07:07 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:07.259963 | orchestrator | 2025-09-20 11:07:07 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:07.259999 | orchestrator | 2025-09-20 11:07:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:10.307749 | orchestrator | 2025-09-20 11:07:10 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:10.309179 | orchestrator | 2025-09-20 11:07:10 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:10.310272 | orchestrator | 2025-09-20 11:07:10 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:10.311709 | orchestrator | 2025-09-20 11:07:10 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:10.311755 | orchestrator | 2025-09-20 11:07:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:13.350215 | orchestrator | 2025-09-20 11:07:13 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:13.350375 | orchestrator | 2025-09-20 11:07:13 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:13.352771 | orchestrator | 2025-09-20 11:07:13 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:13.352856 | orchestrator | 2025-09-20 11:07:13 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:13.352897 | orchestrator | 2025-09-20 11:07:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:16.411668 | orchestrator | 2025-09-20 11:07:16 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:16.413726 | orchestrator | 2025-09-20 11:07:16 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:16.414846 | orchestrator | 2025-09-20 11:07:16 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:16.418419 | orchestrator | 2025-09-20 11:07:16 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:16.418494 | orchestrator | 2025-09-20 11:07:16 | INFO  | Wait 1 second(s) until the next check 
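[Annotation, not part of the job log] The repeating INFO lines are the deploy wrapper polling its task IDs once per interval until each reaches a terminal state; the states shown (STARTED, SUCCESS) are Celery-style task states. A minimal sketch of that wait-until-terminal pattern; fetch_task_state is a hypothetical callable standing in for the real state lookup:

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

    def wait_for_tasks(task_ids, fetch_task_state, interval=1.0):
        """Poll until every task reaches a terminal state.

        fetch_task_state maps a task ID to its current state string
        (e.g. "STARTED", "SUCCESS"); the real lookup in the deploy
        tooling may differ.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = fetch_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
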
2025-09-20 11:07:19.465841 | orchestrator | 2025-09-20 11:07:19 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:19.468555 | orchestrator | 2025-09-20 11:07:19 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:19.470743 | orchestrator | 2025-09-20 11:07:19 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:19.472965 | orchestrator | 2025-09-20 11:07:19 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:19.472991 | orchestrator | 2025-09-20 11:07:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:22.509466 | orchestrator | 2025-09-20 11:07:22 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:22.509807 | orchestrator | 2025-09-20 11:07:22 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:22.510667 | orchestrator | 2025-09-20 11:07:22 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:22.511772 | orchestrator | 2025-09-20 11:07:22 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:22.511807 | orchestrator | 2025-09-20 11:07:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:25.550391 | orchestrator | 2025-09-20 11:07:25 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:25.550805 | orchestrator | 2025-09-20 11:07:25 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:25.551664 | orchestrator | 2025-09-20 11:07:25 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:25.553036 | orchestrator | 2025-09-20 11:07:25 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:25.553147 | orchestrator | 2025-09-20 11:07:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:28.580348 | orchestrator | 2025-09-20 11:07:28 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:28.580775 | orchestrator | 2025-09-20 11:07:28 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:28.581504 | orchestrator | 2025-09-20 11:07:28 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:28.581958 | orchestrator | 2025-09-20 11:07:28 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:28.582008 | orchestrator | 2025-09-20 11:07:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:31.607362 | orchestrator | 2025-09-20 11:07:31 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:31.609962 | orchestrator | 2025-09-20 11:07:31 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:31.635221 | orchestrator | 2025-09-20 11:07:31 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:31.635311 | orchestrator | 2025-09-20 11:07:31 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:31.635326 | orchestrator | 2025-09-20 11:07:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:34.636422 | orchestrator | 2025-09-20 11:07:34 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:34.638553 | orchestrator | 2025-09-20 11:07:34 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 
2025-09-20 11:07:34.640829 | orchestrator | 2025-09-20 11:07:34 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:34.643341 | orchestrator | 2025-09-20 11:07:34 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:34.643394 | orchestrator | 2025-09-20 11:07:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:37.683837 | orchestrator | 2025-09-20 11:07:37 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:37.685555 | orchestrator | 2025-09-20 11:07:37 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:37.687012 | orchestrator | 2025-09-20 11:07:37 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:37.688659 | orchestrator | 2025-09-20 11:07:37 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:37.688867 | orchestrator | 2025-09-20 11:07:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:40.738293 | orchestrator | 2025-09-20 11:07:40 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:40.740362 | orchestrator | 2025-09-20 11:07:40 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:40.742638 | orchestrator | 2025-09-20 11:07:40 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:40.745464 | orchestrator | 2025-09-20 11:07:40 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:40.745503 | orchestrator | 2025-09-20 11:07:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:43.779119 | orchestrator | 2025-09-20 11:07:43 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:43.781523 | orchestrator | 2025-09-20 11:07:43 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:43.783559 | orchestrator | 2025-09-20 11:07:43 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:43.785405 | orchestrator | 2025-09-20 11:07:43 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:43.785450 | orchestrator | 2025-09-20 11:07:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:46.825648 | orchestrator | 2025-09-20 11:07:46 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:46.827306 | orchestrator | 2025-09-20 11:07:46 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:46.829107 | orchestrator | 2025-09-20 11:07:46 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:46.830751 | orchestrator | 2025-09-20 11:07:46 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:46.830807 | orchestrator | 2025-09-20 11:07:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:49.877971 | orchestrator | 2025-09-20 11:07:49 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:49.879601 | orchestrator | 2025-09-20 11:07:49 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:49.881293 | orchestrator | 2025-09-20 11:07:49 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:49.882556 | orchestrator | 2025-09-20 11:07:49 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 
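[Annotation, not part of the job log] The healthcheck entries in the service definitions logged above come in two forms: HTTP probes (healthcheck_curl against the API port) and port probes (healthcheck_port, healthcheck_listen). Roughly equivalent standalone probes in Python, for illustration only; the addresses and ports are taken from the logged healthcheck entries, and these sketches only approximate what the kolla healthcheck scripts do inside the containers:

    import socket

    import requests  # assumed available; illustration only

    def http_probe(url, timeout=30):
        # Rough analogue of healthcheck_curl http://192.168.16.12:9001
        return requests.get(url, timeout=timeout).status_code < 500

    def port_probe(host, port, timeout=30):
        # Rough analogue of a listening-port check such as healthcheck_listen named 53
        with socket.create_connection((host, port), timeout=timeout):
            return True

    print(http_probe("http://192.168.16.12:9001"))  # designate-api on testbed-node-2
    print(port_probe("192.168.16.12", 53))          # designate bind9 backend
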
2025-09-20 11:07:49.882576 | orchestrator | 2025-09-20 11:07:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:52.933678 | orchestrator | 2025-09-20 11:07:52 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:52.934172 | orchestrator | 2025-09-20 11:07:52 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:52.934911 | orchestrator | 2025-09-20 11:07:52 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:52.935723 | orchestrator | 2025-09-20 11:07:52 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:52.935759 | orchestrator | 2025-09-20 11:07:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:55.979734 | orchestrator | 2025-09-20 11:07:55 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:55.979848 | orchestrator | 2025-09-20 11:07:55 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:55.980628 | orchestrator | 2025-09-20 11:07:55 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:55.981194 | orchestrator | 2025-09-20 11:07:55 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:55.981219 | orchestrator | 2025-09-20 11:07:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:07:59.020497 | orchestrator | 2025-09-20 11:07:59 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:07:59.020620 | orchestrator | 2025-09-20 11:07:59 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:07:59.021022 | orchestrator | 2025-09-20 11:07:59 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:07:59.021617 | orchestrator | 2025-09-20 11:07:59 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:07:59.021649 | orchestrator | 2025-09-20 11:07:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:02.062352 | orchestrator | 2025-09-20 11:08:02 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:02.063709 | orchestrator | 2025-09-20 11:08:02 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:02.064595 | orchestrator | 2025-09-20 11:08:02 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:02.065660 | orchestrator | 2025-09-20 11:08:02 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:02.065685 | orchestrator | 2025-09-20 11:08:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:05.105496 | orchestrator | 2025-09-20 11:08:05 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:05.107069 | orchestrator | 2025-09-20 11:08:05 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:05.109782 | orchestrator | 2025-09-20 11:08:05 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:05.112434 | orchestrator | 2025-09-20 11:08:05 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:05.112526 | orchestrator | 2025-09-20 11:08:05 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:08.157206 | orchestrator | 2025-09-20 11:08:08 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:08.159129 
| orchestrator | 2025-09-20 11:08:08 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:08.161214 | orchestrator | 2025-09-20 11:08:08 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:08.163516 | orchestrator | 2025-09-20 11:08:08 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:08.163558 | orchestrator | 2025-09-20 11:08:08 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:11.208491 | orchestrator | 2025-09-20 11:08:11 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:11.210199 | orchestrator | 2025-09-20 11:08:11 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:11.211622 | orchestrator | 2025-09-20 11:08:11 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:11.213454 | orchestrator | 2025-09-20 11:08:11 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:11.213499 | orchestrator | 2025-09-20 11:08:11 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:14.253586 | orchestrator | 2025-09-20 11:08:14 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:14.254435 | orchestrator | 2025-09-20 11:08:14 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:14.255089 | orchestrator | 2025-09-20 11:08:14 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:14.255591 | orchestrator | 2025-09-20 11:08:14 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:14.257462 | orchestrator | 2025-09-20 11:08:14 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:17.310172 | orchestrator | 2025-09-20 11:08:17 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:17.310394 | orchestrator | 2025-09-20 11:08:17 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:17.312074 | orchestrator | 2025-09-20 11:08:17 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:17.312581 | orchestrator | 2025-09-20 11:08:17 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:17.312590 | orchestrator | 2025-09-20 11:08:17 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:20.332111 | orchestrator | 2025-09-20 11:08:20 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:20.332385 | orchestrator | 2025-09-20 11:08:20 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:20.333812 | orchestrator | 2025-09-20 11:08:20 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:20.334223 | orchestrator | 2025-09-20 11:08:20 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:20.334248 | orchestrator | 2025-09-20 11:08:20 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:23.367062 | orchestrator | 2025-09-20 11:08:23 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:23.367216 | orchestrator | 2025-09-20 11:08:23 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:23.367856 | orchestrator | 2025-09-20 11:08:23 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:23.368696 | 
orchestrator | 2025-09-20 11:08:23 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:23.368744 | orchestrator | 2025-09-20 11:08:23 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:26.398835 | orchestrator | 2025-09-20 11:08:26 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:26.400708 | orchestrator | 2025-09-20 11:08:26 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:26.402946 | orchestrator | 2025-09-20 11:08:26 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:26.403580 | orchestrator | 2025-09-20 11:08:26 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:26.403598 | orchestrator | 2025-09-20 11:08:26 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:29.447289 | orchestrator | 2025-09-20 11:08:29 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:29.448681 | orchestrator | 2025-09-20 11:08:29 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:29.449636 | orchestrator | 2025-09-20 11:08:29 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:29.451482 | orchestrator | 2025-09-20 11:08:29 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:29.451524 | orchestrator | 2025-09-20 11:08:29 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:32.490616 | orchestrator | 2025-09-20 11:08:32 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:32.491701 | orchestrator | 2025-09-20 11:08:32 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:32.493617 | orchestrator | 2025-09-20 11:08:32 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state STARTED 2025-09-20 11:08:32.495508 | orchestrator | 2025-09-20 11:08:32 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:32.495538 | orchestrator | 2025-09-20 11:08:32 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:35.536547 | orchestrator | 2025-09-20 11:08:35 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:35.538306 | orchestrator | 2025-09-20 11:08:35 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:35.541092 | orchestrator | 2025-09-20 11:08:35 | INFO  | Task 3d77256c-1d3e-492e-af9f-f43db864fb5a is in state SUCCESS 2025-09-20 11:08:35.542947 | orchestrator | 2025-09-20 11:08:35.542980 | orchestrator | 2025-09-20 11:08:35.542985 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:08:35.542991 | orchestrator | 2025-09-20 11:08:35.542995 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:08:35.543000 | orchestrator | Saturday 20 September 2025 11:06:43 +0000 (0:00:00.178) 0:00:00.178 **** 2025-09-20 11:08:35.543004 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:35.543010 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:08:35.543013 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:08:35.543017 | orchestrator | 2025-09-20 11:08:35.543037 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:08:35.543042 | orchestrator | Saturday 20 September 2025 11:06:43 +0000 
(0:00:00.335) 0:00:00.514 **** 2025-09-20 11:08:35.543046 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-20 11:08:35.543051 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-20 11:08:35.543055 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-20 11:08:35.543075 | orchestrator | 2025-09-20 11:08:35.543079 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-20 11:08:35.543083 | orchestrator | 2025-09-20 11:08:35.543087 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-20 11:08:35.543091 | orchestrator | Saturday 20 September 2025 11:06:44 +0000 (0:00:00.916) 0:00:01.431 **** 2025-09-20 11:08:35.543095 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:35.543099 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:08:35.543103 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:08:35.543107 | orchestrator | 2025-09-20 11:08:35.543111 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:08:35.543135 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:08:35.543141 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:08:35.543145 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:08:35.543149 | orchestrator | 2025-09-20 11:08:35.543153 | orchestrator | 2025-09-20 11:08:35.543157 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:08:35.543160 | orchestrator | Saturday 20 September 2025 11:06:45 +0000 (0:00:00.887) 0:00:02.318 **** 2025-09-20 11:08:35.543164 | orchestrator | =============================================================================== 2025-09-20 11:08:35.543173 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2025-09-20 11:08:35.543177 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.89s 2025-09-20 11:08:35.543181 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-09-20 11:08:35.543185 | orchestrator | 2025-09-20 11:08:35.543189 | orchestrator | 2025-09-20 11:08:35.543193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:08:35.543197 | orchestrator | 2025-09-20 11:08:35.543201 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:08:35.543205 | orchestrator | Saturday 20 September 2025 11:06:38 +0000 (0:00:00.276) 0:00:00.276 **** 2025-09-20 11:08:35.543208 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:35.543212 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:08:35.543216 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:08:35.543220 | orchestrator | 2025-09-20 11:08:35.543224 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:08:35.543227 | orchestrator | Saturday 20 September 2025 11:06:38 +0000 (0:00:00.340) 0:00:00.617 **** 2025-09-20 11:08:35.543231 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-20 11:08:35.543235 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-20 11:08:35.543239 | orchestrator | 
ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-20 11:08:35.543243 | orchestrator | 2025-09-20 11:08:35.543246 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-20 11:08:35.543250 | orchestrator | 2025-09-20 11:08:35.543254 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-20 11:08:35.543258 | orchestrator | Saturday 20 September 2025 11:06:39 +0000 (0:00:00.447) 0:00:01.065 **** 2025-09-20 11:08:35.543262 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:35.543266 | orchestrator | 2025-09-20 11:08:35.543269 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-20 11:08:35.543273 | orchestrator | Saturday 20 September 2025 11:06:39 +0000 (0:00:00.551) 0:00:01.616 **** 2025-09-20 11:08:35.543278 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-20 11:08:35.543282 | orchestrator | 2025-09-20 11:08:35.543285 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-20 11:08:35.543293 | orchestrator | Saturday 20 September 2025 11:06:43 +0000 (0:00:03.250) 0:00:04.867 **** 2025-09-20 11:08:35.543297 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-20 11:08:35.543301 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-20 11:08:35.543305 | orchestrator | 2025-09-20 11:08:35.543309 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-20 11:08:35.543312 | orchestrator | Saturday 20 September 2025 11:06:48 +0000 (0:00:05.684) 0:00:10.551 **** 2025-09-20 11:08:35.543316 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:08:35.543320 | orchestrator | 2025-09-20 11:08:35.543324 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-20 11:08:35.543328 | orchestrator | Saturday 20 September 2025 11:06:51 +0000 (0:00:02.949) 0:00:13.500 **** 2025-09-20 11:08:35.543340 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:08:35.543344 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-20 11:08:35.543348 | orchestrator | 2025-09-20 11:08:35.543352 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-20 11:08:35.543356 | orchestrator | Saturday 20 September 2025 11:06:55 +0000 (0:00:03.676) 0:00:17.176 **** 2025-09-20 11:08:35.543360 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:08:35.543364 | orchestrator | 2025-09-20 11:08:35.543368 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-20 11:08:35.543372 | orchestrator | Saturday 20 September 2025 11:06:58 +0000 (0:00:03.092) 0:00:20.269 **** 2025-09-20 11:08:35.543375 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-20 11:08:35.543379 | orchestrator | 2025-09-20 11:08:35.543383 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-20 11:08:35.543387 | orchestrator | Saturday 20 September 2025 11:07:02 +0000 (0:00:04.100) 0:00:24.369 **** 2025-09-20 11:08:35.543391 | orchestrator | 
changed: [testbed-node-0] 2025-09-20 11:08:35.543395 | orchestrator | 2025-09-20 11:08:35.543398 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-20 11:08:35.543402 | orchestrator | Saturday 20 September 2025 11:07:05 +0000 (0:00:02.952) 0:00:27.322 **** 2025-09-20 11:08:35.543406 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:35.543410 | orchestrator | 2025-09-20 11:08:35.543414 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-20 11:08:35.543418 | orchestrator | Saturday 20 September 2025 11:07:09 +0000 (0:00:03.662) 0:00:30.985 **** 2025-09-20 11:08:35.543424 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:35.543428 | orchestrator | 2025-09-20 11:08:35.543431 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-20 11:08:35.543435 | orchestrator | Saturday 20 September 2025 11:07:12 +0000 (0:00:03.494) 0:00:34.479 **** 2025-09-20 11:08:35.543441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543480 | orchestrator | 2025-09-20 11:08:35.543487 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-20 11:08:35.543490 | orchestrator | Saturday 20 September 2025 11:07:14 +0000 (0:00:01.350) 0:00:35.830 **** 2025-09-20 11:08:35.543494 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:35.543498 | orchestrator | 2025-09-20 11:08:35.543502 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-20 11:08:35.543506 | orchestrator | Saturday 20 September 2025 11:07:14 +0000 (0:00:00.127) 0:00:35.957 **** 2025-09-20 11:08:35.543509 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:35.543513 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:35.543517 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:35.543521 | orchestrator | 2025-09-20 11:08:35.543525 | orchestrator | TASK [magnum : Check if kubeconfig 
file is supplied] *************************** 2025-09-20 11:08:35.543528 | orchestrator | Saturday 20 September 2025 11:07:14 +0000 (0:00:00.485) 0:00:36.442 **** 2025-09-20 11:08:35.543532 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 11:08:35.543536 | orchestrator | 2025-09-20 11:08:35.543540 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-20 11:08:35.543543 | orchestrator | Saturday 20 September 2025 11:07:15 +0000 (0:00:00.864) 0:00:37.307 **** 2025-09-20 11:08:35.543548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543586 | orchestrator | 2025-09-20 11:08:35.543630 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-20 11:08:35.543635 | orchestrator | Saturday 20 September 2025 11:07:17 +0000 (0:00:02.334) 0:00:39.641 **** 2025-09-20 11:08:35.543640 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:35.543644 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:08:35.543649 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:08:35.543653 | orchestrator | 2025-09-20 11:08:35.543658 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-20 11:08:35.543665 | orchestrator | Saturday 20 September 2025 11:07:18 +0000 (0:00:00.298) 0:00:39.939 **** 2025-09-20 11:08:35.543670 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:35.543674 | orchestrator | 2025-09-20 11:08:35.543679 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-20 11:08:35.543683 | orchestrator | Saturday 20 September 2025 11:07:18 +0000 (0:00:00.588) 0:00:40.527 **** 2025-09-20 11:08:35.543690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.543709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.543732 | orchestrator | 2025-09-20 11:08:35.543736 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-20 11:08:35.543740 | orchestrator | Saturday 20 September 2025 11:07:21 +0000 (0:00:02.285) 0:00:42.813 **** 2025-09-20 11:08:35.543745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.543750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.543754 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:35.543759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.543768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.543772 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:35.543780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.543791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.543795 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:35.543800 | orchestrator | 2025-09-20 11:08:35.543804 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 
2025-09-20 11:08:35.543809 | orchestrator | Saturday 20 September 2025 11:07:21 +0000 (0:00:00.723) 0:00:43.536 **** 2025-09-20 11:08:35.543813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.543818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.543823 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:35.543830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.543840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.543845 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:35.543850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.543854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.543859 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:35.543863 | orchestrator | 2025-09-20 11:08:35.543868 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-20 11:08:35.543872 | orchestrator | Saturday 20 September 2025 11:07:22 +0000 (0:00:00.894) 0:00:44.431 **** 2025-09-20 11:08:35.544015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544093 | orchestrator | 2025-09-20 11:08:35.544097 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-20 11:08:35.544101 | orchestrator | Saturday 20 September 2025 11:07:25 +0000 (0:00:02.404) 0:00:46.835 **** 2025-09-20 11:08:35.544108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544142 | orchestrator | 2025-09-20 11:08:35.544146 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-20 11:08:35.544149 | orchestrator | Saturday 20 September 2025 11:07:30 +0000 (0:00:05.264) 0:00:52.099 **** 2025-09-20 11:08:35.544153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.544157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.544161 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:35.544165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.544176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.544180 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:35.544186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 11:08:35.544190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:35.544194 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:35.544198 | orchestrator | 2025-09-20 11:08:35.544202 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-20 11:08:35.544206 | orchestrator | Saturday 20 September 2025 11:07:31 +0000 (0:00:00.936) 0:00:53.036 **** 2025-09-20 11:08:35.544210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 11:08:35.544231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:35.544246 | orchestrator | 2025-09-20 11:08:35.544249 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-20 11:08:35.544253 | orchestrator | Saturday 20 September 2025 11:07:34 +0000 (0:00:03.133) 0:00:56.170 **** 2025-09-20 11:08:35.544257 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:35.544261 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:35.544265 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:35.544269 | orchestrator | 2025-09-20 11:08:35.544273 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-20 11:08:35.544276 | orchestrator | Saturday 20 September 2025 11:07:34 +0000 (0:00:00.270) 0:00:56.440 **** 2025-09-20 11:08:35.544280 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:35.544284 | orchestrator | 2025-09-20 11:08:35.544288 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-20 11:08:35.544292 | orchestrator | Saturday 20 September 2025 11:07:36 +0000 (0:00:02.128) 0:00:58.569 **** 2025-09-20 11:08:35.544295 | orchestrator | changed: [testbed-node-0] 
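The two database tasks that just completed on testbed-node-0 amount to creating a `magnum` schema and a `magnum` user with full privileges on it. A stand-alone sketch with the `community.mysql` collection is shown below; the login host (taken from the `no_proxy` lists above, presumably the internal VIP) and the credential variables are placeholders, and kolla-ansible performs the same step through its own tooling rather than these modules:

```yaml
# Illustrative sketch only - what the "Creating Magnum database" and
# "Creating Magnum database user" tasks above amount to. Login host and
# credentials are placeholders, not values from this deployment.
- name: Create the Magnum database and user (sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Create the magnum database
      community.mysql.mysql_db:
        login_host: 192.168.16.9                   # assumed internal VIP (from the no_proxy lists)
        login_user: root
        login_password: "{{ database_password }}"  # placeholder
        name: magnum
        state: present

    - name: Create the magnum user with full access to its database
      community.mysql.mysql_user:
        login_host: 192.168.16.9
        login_user: root
        login_password: "{{ database_password }}"
        name: magnum
        password: "{{ magnum_database_password }}" # placeholder
        host: "%"
        priv: "magnum.*:ALL"
        state: present
```

With the database in place, the play runs the Magnum bootstrap container to apply the schema migrations and then the handlers restart magnum_api and magnum_conductor on all three nodes: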
2025-09-20 11:08:35.544299 | orchestrator | 2025-09-20 11:08:35.544303 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-20 11:08:35.544307 | orchestrator | Saturday 20 September 2025 11:07:38 +0000 (0:00:02.081) 0:01:00.651 **** 2025-09-20 11:08:35.544313 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:35.544317 | orchestrator | 2025-09-20 11:08:35.544320 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-20 11:08:35.544324 | orchestrator | Saturday 20 September 2025 11:07:57 +0000 (0:00:18.430) 0:01:19.081 **** 2025-09-20 11:08:35.544328 | orchestrator | 2025-09-20 11:08:35.544332 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-20 11:08:35.544336 | orchestrator | Saturday 20 September 2025 11:07:57 +0000 (0:00:00.060) 0:01:19.141 **** 2025-09-20 11:08:35.544339 | orchestrator | 2025-09-20 11:08:35.544343 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-20 11:08:35.544347 | orchestrator | Saturday 20 September 2025 11:07:57 +0000 (0:00:00.057) 0:01:19.198 **** 2025-09-20 11:08:35.544351 | orchestrator | 2025-09-20 11:08:35.544355 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-20 11:08:35.544371 | orchestrator | Saturday 20 September 2025 11:07:57 +0000 (0:00:00.060) 0:01:19.259 **** 2025-09-20 11:08:35.544375 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:35.544379 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:35.544383 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:35.544387 | orchestrator | 2025-09-20 11:08:35.544390 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-20 11:08:35.544394 | orchestrator | Saturday 20 September 2025 11:08:13 +0000 (0:00:16.016) 0:01:35.275 **** 2025-09-20 11:08:35.544398 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:35.544402 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:35.544406 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:35.544409 | orchestrator | 2025-09-20 11:08:35.544415 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:08:35.544420 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 11:08:35.544424 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 11:08:35.544428 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 11:08:35.544431 | orchestrator | 2025-09-20 11:08:35.544435 | orchestrator | 2025-09-20 11:08:35.544439 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:08:35.544443 | orchestrator | Saturday 20 September 2025 11:08:33 +0000 (0:00:19.518) 0:01:54.794 **** 2025-09-20 11:08:35.544452 | orchestrator | =============================================================================== 2025-09-20 11:08:35.544456 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 19.52s 2025-09-20 11:08:35.544459 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.43s 2025-09-20 11:08:35.544463 | orchestrator | magnum : Restart magnum-api 
container ---------------------------------- 16.02s 2025-09-20 11:08:35.544467 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.68s 2025-09-20 11:08:35.544471 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.26s 2025-09-20 11:08:35.544475 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.10s 2025-09-20 11:08:35.544478 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.68s 2025-09-20 11:08:35.544482 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.66s 2025-09-20 11:08:35.544486 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.49s 2025-09-20 11:08:35.544490 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.25s 2025-09-20 11:08:35.544493 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.13s 2025-09-20 11:08:35.544497 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.09s 2025-09-20 11:08:35.544501 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.95s 2025-09-20 11:08:35.544505 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.95s 2025-09-20 11:08:35.544508 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.40s 2025-09-20 11:08:35.544512 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.33s 2025-09-20 11:08:35.544516 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.29s 2025-09-20 11:08:35.544519 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.13s 2025-09-20 11:08:35.544523 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.08s 2025-09-20 11:08:35.544527 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.35s 2025-09-20 11:08:35.544531 | orchestrator | 2025-09-20 11:08:35 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:35.544535 | orchestrator | 2025-09-20 11:08:35 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:38.585972 | orchestrator | 2025-09-20 11:08:38 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:38.587635 | orchestrator | 2025-09-20 11:08:38 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:38.590195 | orchestrator | 2025-09-20 11:08:38 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:38.590271 | orchestrator | 2025-09-20 11:08:38 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:41.645947 | orchestrator | 2025-09-20 11:08:41 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:41.648682 | orchestrator | 2025-09-20 11:08:41 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:41.650374 | orchestrator | 2025-09-20 11:08:41 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:41.650405 | orchestrator | 2025-09-20 11:08:41 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:44.689944 | orchestrator | 2025-09-20 11:08:44 | INFO  | Task 
e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:44.692000 | orchestrator | 2025-09-20 11:08:44 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:44.693721 | orchestrator | 2025-09-20 11:08:44 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:44.693791 | orchestrator | 2025-09-20 11:08:44 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:47.737678 | orchestrator | 2025-09-20 11:08:47 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:47.738607 | orchestrator | 2025-09-20 11:08:47 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:47.739883 | orchestrator | 2025-09-20 11:08:47 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:47.740040 | orchestrator | 2025-09-20 11:08:47 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:50.782930 | orchestrator | 2025-09-20 11:08:50 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:50.783245 | orchestrator | 2025-09-20 11:08:50 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:50.784487 | orchestrator | 2025-09-20 11:08:50 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:50.784533 | orchestrator | 2025-09-20 11:08:50 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:53.814708 | orchestrator | 2025-09-20 11:08:53 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:53.816906 | orchestrator | 2025-09-20 11:08:53 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:53.818978 | orchestrator | 2025-09-20 11:08:53 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:53.819145 | orchestrator | 2025-09-20 11:08:53 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:56.862652 | orchestrator | 2025-09-20 11:08:56 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:56.863817 | orchestrator | 2025-09-20 11:08:56 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state STARTED 2025-09-20 11:08:56.865422 | orchestrator | 2025-09-20 11:08:56 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state STARTED 2025-09-20 11:08:56.865493 | orchestrator | 2025-09-20 11:08:56 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:08:59.919095 | orchestrator | 2025-09-20 11:08:59 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:08:59.923746 | orchestrator | 2025-09-20 11:08:59 | INFO  | Task 7c8ed203-dbb1-4350-9189-af0b35b98c8d is in state SUCCESS 2025-09-20 11:08:59.924176 | orchestrator | 2025-09-20 11:08:59.926742 | orchestrator | 2025-09-20 11:08:59.926775 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:08:59.926789 | orchestrator | 2025-09-20 11:08:59.926800 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-20 11:08:59.926812 | orchestrator | Saturday 20 September 2025 11:00:08 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-20 11:08:59.926823 | orchestrator | changed: [testbed-manager] 2025-09-20 11:08:59.926836 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.926847 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.926857 | orchestrator | 
changed: [testbed-node-2] 2025-09-20 11:08:59.926868 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.926879 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.926890 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.926901 | orchestrator | 2025-09-20 11:08:59.926913 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:08:59.927047 | orchestrator | Saturday 20 September 2025 11:00:09 +0000 (0:00:00.807) 0:00:01.067 **** 2025-09-20 11:08:59.927059 | orchestrator | changed: [testbed-manager] 2025-09-20 11:08:59.927070 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.927081 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.927180 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.927192 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.927203 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.927214 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.927225 | orchestrator | 2025-09-20 11:08:59.927236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:08:59.927247 | orchestrator | Saturday 20 September 2025 11:00:09 +0000 (0:00:00.621) 0:00:01.689 **** 2025-09-20 11:08:59.927258 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-20 11:08:59.927270 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-20 11:08:59.927281 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-20 11:08:59.927292 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-20 11:08:59.927302 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-20 11:08:59.927313 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-20 11:08:59.927324 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-20 11:08:59.927335 | orchestrator | 2025-09-20 11:08:59.927346 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-20 11:08:59.927359 | orchestrator | 2025-09-20 11:08:59.927372 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-20 11:08:59.927384 | orchestrator | Saturday 20 September 2025 11:00:10 +0000 (0:00:00.806) 0:00:02.496 **** 2025-09-20 11:08:59.927397 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.927409 | orchestrator | 2025-09-20 11:08:59.927422 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-20 11:08:59.927434 | orchestrator | Saturday 20 September 2025 11:00:11 +0000 (0:00:00.724) 0:00:03.220 **** 2025-09-20 11:08:59.927447 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-20 11:08:59.927460 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-20 11:08:59.927473 | orchestrator | 2025-09-20 11:08:59.927500 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-20 11:08:59.927513 | orchestrator | Saturday 20 September 2025 11:00:14 +0000 (0:00:03.256) 0:00:06.477 **** 2025-09-20 11:08:59.927525 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 11:08:59.927538 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 11:08:59.927550 | orchestrator | changed: [testbed-node-0] 2025-09-20 
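The three "Group hosts based on ..." tasks at the top of this play build dynamic inventory groups from host variables, so that the following plays can target only the hosts that run a given OpenStack release, Kolla action, or enabled service (the enable_nova_True items above). A minimal sketch of that pattern with group_by; the variable names are inferred from the task names and item labels, not copied from the actual kolla-ansible source:

- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on OpenStack release
      ansible.builtin.group_by:
        key: "openstack_release_{{ openstack_release }}"

    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_nova_{{ enable_nova | bool }}"

group_by reports changed when it adds a host to a new group, which is why every node shows up as changed here even though nothing is modified on the nodes themselves.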
11:08:59.927563 | orchestrator | 2025-09-20 11:08:59.927575 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-20 11:08:59.927615 | orchestrator | Saturday 20 September 2025 11:00:17 +0000 (0:00:03.516) 0:00:09.993 **** 2025-09-20 11:08:59.927628 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.927641 | orchestrator | 2025-09-20 11:08:59.927653 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-20 11:08:59.927665 | orchestrator | Saturday 20 September 2025 11:00:18 +0000 (0:00:00.638) 0:00:10.632 **** 2025-09-20 11:08:59.927677 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.927689 | orchestrator | 2025-09-20 11:08:59.927702 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-20 11:08:59.927714 | orchestrator | Saturday 20 September 2025 11:00:20 +0000 (0:00:02.023) 0:00:12.655 **** 2025-09-20 11:08:59.927726 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.927736 | orchestrator | 2025-09-20 11:08:59.927748 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-20 11:08:59.927785 | orchestrator | Saturday 20 September 2025 11:00:24 +0000 (0:00:04.202) 0:00:16.858 **** 2025-09-20 11:08:59.927797 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.927808 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.927819 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.927830 | orchestrator | 2025-09-20 11:08:59.927841 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-20 11:08:59.927867 | orchestrator | Saturday 20 September 2025 11:00:25 +0000 (0:00:00.329) 0:00:17.188 **** 2025-09-20 11:08:59.927879 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.927890 | orchestrator | 2025-09-20 11:08:59.927900 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-20 11:08:59.927911 | orchestrator | Saturday 20 September 2025 11:00:49 +0000 (0:00:24.523) 0:00:41.711 **** 2025-09-20 11:08:59.927922 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.927933 | orchestrator | 2025-09-20 11:08:59.927944 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-20 11:08:59.927955 | orchestrator | Saturday 20 September 2025 11:01:01 +0000 (0:00:12.085) 0:00:53.797 **** 2025-09-20 11:08:59.927966 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.927977 | orchestrator | 2025-09-20 11:08:59.927988 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-20 11:08:59.927999 | orchestrator | Saturday 20 September 2025 11:01:12 +0000 (0:00:10.541) 0:01:04.338 **** 2025-09-20 11:08:59.928045 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.928058 | orchestrator | 2025-09-20 11:08:59.928069 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-20 11:08:59.928081 | orchestrator | Saturday 20 September 2025 11:01:14 +0000 (0:00:01.859) 0:01:06.198 **** 2025-09-20 11:08:59.928091 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.928102 | orchestrator | 2025-09-20 11:08:59.928113 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-20 11:08:59.928124 | orchestrator | Saturday 20 
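The "Creating Nova databases" and "Creating Nova databases user and setting permissions" tasks above run only once (on the first API host) and create the nova_api and nova_cell0 databases plus an account with access to both. A rough equivalent using the community.mysql modules; kolla-ansible actually drives this through its toolbox container, and the login host and variable names below are illustrative assumptions:

- name: Creating Nova databases
  community.mysql.mysql_db:
    login_host: "{{ database_address }}"          # internal DB VIP, assumption
    login_user: root
    login_password: "{{ database_password }}"
    name: "{{ item }}"
    state: present
  loop:
    - nova_cell0
    - nova_api
  run_once: true

- name: Creating Nova databases user and setting permissions
  community.mysql.mysql_user:
    login_host: "{{ database_address }}"
    login_user: root
    login_password: "{{ database_password }}"
    name: nova_api
    password: "{{ nova_api_database_password }}"
    host: "%"
    priv: "nova_api.*:ALL/nova_cell0.*:ALL"
    state: present
  run_once: true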
September 2025 11:01:14 +0000 (0:00:00.671) 0:01:06.869 **** 2025-09-20 11:08:59.928135 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.928146 | orchestrator | 2025-09-20 11:08:59.928157 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-20 11:08:59.928168 | orchestrator | Saturday 20 September 2025 11:01:15 +0000 (0:00:00.907) 0:01:07.777 **** 2025-09-20 11:08:59.928179 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.928189 | orchestrator | 2025-09-20 11:08:59.928200 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-20 11:08:59.928211 | orchestrator | Saturday 20 September 2025 11:01:31 +0000 (0:00:15.801) 0:01:23.578 **** 2025-09-20 11:08:59.928222 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.928233 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928243 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928254 | orchestrator | 2025-09-20 11:08:59.928265 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-20 11:08:59.928276 | orchestrator | 2025-09-20 11:08:59.928287 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-20 11:08:59.928297 | orchestrator | Saturday 20 September 2025 11:01:32 +0000 (0:00:00.588) 0:01:24.167 **** 2025-09-20 11:08:59.928308 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.928319 | orchestrator | 2025-09-20 11:08:59.928330 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-20 11:08:59.928341 | orchestrator | Saturday 20 September 2025 11:01:33 +0000 (0:00:01.309) 0:01:25.477 **** 2025-09-20 11:08:59.928351 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928362 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928373 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.928384 | orchestrator | 2025-09-20 11:08:59.928395 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-20 11:08:59.928406 | orchestrator | Saturday 20 September 2025 11:01:35 +0000 (0:00:02.486) 0:01:27.963 **** 2025-09-20 11:08:59.928417 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928427 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928438 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.928449 | orchestrator | 2025-09-20 11:08:59.928460 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-20 11:08:59.928484 | orchestrator | Saturday 20 September 2025 11:01:38 +0000 (0:00:02.218) 0:01:30.181 **** 2025-09-20 11:08:59.928495 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.928506 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928516 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928527 | orchestrator | 2025-09-20 11:08:59.928543 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-20 11:08:59.928555 | orchestrator | Saturday 20 September 2025 11:01:38 +0000 (0:00:00.383) 0:01:30.565 **** 2025-09-20 11:08:59.928566 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-20 11:08:59.928576 | orchestrator | 
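The bootstrap containers above wrap nova-manage: "Create cell0 mappings" and "Get a list of existing cells" in the API play, and "Create cell" a little further down in the cell play, are thin wrappers around cell_v2 commands run with the nova-api image. Condensed into plain command tasks (the real role passes explicit database and transport URLs and executes everything inside a one-shot bootstrap container):

- name: Create cell0 mappings
  ansible.builtin.command: nova-manage cell_v2 map_cell0
  run_once: true

- name: Get a list of existing cells
  ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
  register: existing_cells
  changed_when: false
  run_once: true

- name: Create cell
  ansible.builtin.command: nova-manage cell_v2 create_cell
  run_once: true
  when: "'cell1' not in existing_cells.stdout"    # illustrative guard, not the role's actual condition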
skipping: [testbed-node-1] 2025-09-20 11:08:59.928587 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-20 11:08:59.928598 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928608 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-20 11:08:59.928619 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-20 11:08:59.928630 | orchestrator | 2025-09-20 11:08:59.928641 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-20 11:08:59.928652 | orchestrator | Saturday 20 September 2025 11:01:46 +0000 (0:00:08.120) 0:01:38.686 **** 2025-09-20 11:08:59.928663 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.928673 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928684 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928695 | orchestrator | 2025-09-20 11:08:59.928706 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-20 11:08:59.928716 | orchestrator | Saturday 20 September 2025 11:01:46 +0000 (0:00:00.323) 0:01:39.009 **** 2025-09-20 11:08:59.928727 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-20 11:08:59.928738 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.928749 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-20 11:08:59.928760 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928770 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-20 11:08:59.928781 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928792 | orchestrator | 2025-09-20 11:08:59.928803 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-20 11:08:59.928814 | orchestrator | Saturday 20 September 2025 11:01:47 +0000 (0:00:00.601) 0:01:39.610 **** 2025-09-20 11:08:59.928824 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928835 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928846 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.928856 | orchestrator | 2025-09-20 11:08:59.928867 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-20 11:08:59.928878 | orchestrator | Saturday 20 September 2025 11:01:48 +0000 (0:00:00.470) 0:01:40.081 **** 2025-09-20 11:08:59.928889 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928899 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928910 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.928921 | orchestrator | 2025-09-20 11:08:59.928932 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-20 11:08:59.928943 | orchestrator | Saturday 20 September 2025 11:01:49 +0000 (0:00:00.957) 0:01:41.038 **** 2025-09-20 11:08:59.928954 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.928965 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.928982 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.928993 | orchestrator | 2025-09-20 11:08:59.929004 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-20 11:08:59.929044 | orchestrator | Saturday 20 September 2025 11:01:51 +0000 (0:00:02.563) 0:01:43.601 **** 2025-09-20 11:08:59.929055 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.929066 | orchestrator | skipping: [testbed-node-2] 
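The service-rabbitmq tasks in this play run from a single host and are delegated to the RabbitMQ node (the ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] line above is the callback printing that delegation target); on the other nodes, and for the default vhost, they are skipped. When they do run they ensure the vhost and the messaging user exist, roughly as sketched below; user, vhost and password names are placeholders:

- name: nova | Ensure RabbitMQ vhosts exist
  community.rabbitmq.rabbitmq_vhost:
    name: /
    state: present
  run_once: true

- name: nova | Ensure RabbitMQ users exist
  community.rabbitmq.rabbitmq_user:
    user: openstack
    password: "{{ rabbitmq_password }}"
    vhost: /
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
    state: present
  run_once: true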
2025-09-20 11:08:59.929077 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.929088 | orchestrator | 2025-09-20 11:08:59.929099 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-20 11:08:59.929118 | orchestrator | Saturday 20 September 2025 11:02:11 +0000 (0:00:20.219) 0:02:03.821 **** 2025-09-20 11:08:59.929129 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.929140 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.929151 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.929162 | orchestrator | 2025-09-20 11:08:59.929172 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-20 11:08:59.929183 | orchestrator | Saturday 20 September 2025 11:02:23 +0000 (0:00:11.213) 0:02:15.035 **** 2025-09-20 11:08:59.929194 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.929205 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.929215 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.929226 | orchestrator | 2025-09-20 11:08:59.929237 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-20 11:08:59.929248 | orchestrator | Saturday 20 September 2025 11:02:24 +0000 (0:00:01.099) 0:02:16.135 **** 2025-09-20 11:08:59.929259 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.929270 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.929280 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.929291 | orchestrator | 2025-09-20 11:08:59.929302 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-20 11:08:59.929313 | orchestrator | Saturday 20 September 2025 11:02:34 +0000 (0:00:10.183) 0:02:26.318 **** 2025-09-20 11:08:59.929323 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.929334 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.929345 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.929355 | orchestrator | 2025-09-20 11:08:59.929366 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-20 11:08:59.929377 | orchestrator | Saturday 20 September 2025 11:02:35 +0000 (0:00:00.959) 0:02:27.278 **** 2025-09-20 11:08:59.929388 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.929399 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.929410 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.929420 | orchestrator | 2025-09-20 11:08:59.929431 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-20 11:08:59.929442 | orchestrator | 2025-09-20 11:08:59.929453 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-20 11:08:59.929464 | orchestrator | Saturday 20 September 2025 11:02:35 +0000 (0:00:00.421) 0:02:27.699 **** 2025-09-20 11:08:59.929475 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.929486 | orchestrator | 2025-09-20 11:08:59.929497 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-20 11:08:59.929508 | orchestrator | Saturday 20 September 2025 11:02:36 +0000 (0:00:00.472) 0:02:28.172 **** 2025-09-20 11:08:59.929524 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-20 
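The service-ks-register block that starts here registers the compute service in Keystone: a service entry, internal and public endpoints, the nova service user, and its role assignments (the nova_legacy items are skipped, presumably because the legacy v2 API is not registered in this testbed). The same steps expressed directly with the openstack.cloud collection; the endpoint URLs are the ones shown in the log, everything else is a placeholder:

- name: nova | Creating services
  openstack.cloud.catalog_service:
    name: nova
    service_type: compute
    state: present

- name: nova | Creating endpoints
  openstack.cloud.endpoint:
    service: nova
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
    state: present
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:8774/v2.1" }
    - { interface: public, url: "https://api.testbed.osism.xyz:8774/v2.1" }

- name: nova | Creating users
  openstack.cloud.identity_user:
    name: nova
    password: "{{ nova_keystone_password }}"
    default_project: service
    state: present

- name: nova | Granting user roles
  openstack.cloud.role_assignment:
    user: nova
    role: "{{ item }}"
    project: service
  loop:
    - admin
    - service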
11:08:59.929535 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-20 11:08:59.929545 | orchestrator | 2025-09-20 11:08:59.929556 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-20 11:08:59.929567 | orchestrator | Saturday 20 September 2025 11:02:39 +0000 (0:00:02.992) 0:02:31.165 **** 2025-09-20 11:08:59.929578 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-20 11:08:59.929591 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-20 11:08:59.929602 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-20 11:08:59.929613 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-20 11:08:59.929624 | orchestrator | 2025-09-20 11:08:59.929635 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-20 11:08:59.929652 | orchestrator | Saturday 20 September 2025 11:02:45 +0000 (0:00:05.995) 0:02:37.160 **** 2025-09-20 11:08:59.929664 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:08:59.929675 | orchestrator | 2025-09-20 11:08:59.929685 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-20 11:08:59.929696 | orchestrator | Saturday 20 September 2025 11:02:48 +0000 (0:00:03.011) 0:02:40.172 **** 2025-09-20 11:08:59.929707 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:08:59.929718 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-20 11:08:59.929729 | orchestrator | 2025-09-20 11:08:59.929740 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-20 11:08:59.929751 | orchestrator | Saturday 20 September 2025 11:02:51 +0000 (0:00:03.513) 0:02:43.685 **** 2025-09-20 11:08:59.929762 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:08:59.929773 | orchestrator | 2025-09-20 11:08:59.929784 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-20 11:08:59.929795 | orchestrator | Saturday 20 September 2025 11:02:54 +0000 (0:00:03.140) 0:02:46.825 **** 2025-09-20 11:08:59.929805 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-20 11:08:59.929816 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-20 11:08:59.929827 | orchestrator | 2025-09-20 11:08:59.929838 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-20 11:08:59.929855 | orchestrator | Saturday 20 September 2025 11:03:01 +0000 (0:00:06.847) 0:02:53.672 **** 2025-09-20 11:08:59.929873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.929895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.929916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.929938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.929952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.929965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.929977 | orchestrator | 2025-09-20 11:08:59.929988 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-20 11:08:59.929999 | orchestrator | Saturday 20 September 2025 11:03:03 +0000 (0:00:01.494) 0:02:55.166 **** 2025-09-20 11:08:59.930112 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.930128 | orchestrator | 2025-09-20 11:08:59.930140 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-20 11:08:59.930151 | orchestrator | Saturday 20 September 2025 11:03:03 +0000 (0:00:00.310) 0:02:55.477 **** 2025-09-20 11:08:59.930161 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.930173 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.930184 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.930194 | orchestrator | 2025-09-20 11:08:59.930206 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-20 11:08:59.930225 | orchestrator | Saturday 20 September 2025 11:03:03 +0000 (0:00:00.439) 0:02:55.917 **** 2025-09-20 11:08:59.930236 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 11:08:59.930247 | orchestrator | 2025-09-20 11:08:59.930257 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-20 11:08:59.930272 | orchestrator | Saturday 20 September 2025 11:03:04 +0000 (0:00:00.916) 0:02:56.833 **** 2025-09-20 11:08:59.930282 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.930292 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.930301 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.930311 | orchestrator | 2025-09-20 11:08:59.930321 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-20 11:08:59.930330 | orchestrator | Saturday 20 September 2025 11:03:05 +0000 (0:00:00.313) 
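The policy and vendordata tasks above only probe for optional files on the deploy host and set facts when they exist; in this run neither is configured, so they are skipped or simply return ok. A sketch of that lookup pattern, with an assumed path layout:

- name: Check for vendordata file
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/nova/vendordata.json"   # path layout is an assumption
  delegate_to: localhost
  run_once: true
  register: vendordata_file

- name: Set vendordata file path
  ansible.builtin.set_fact:
    vendordata_file_path: "{{ vendordata_file.stat.path }}"
  when: vendordata_file.stat.exists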
0:02:57.146 **** 2025-09-20 11:08:59.930340 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.930350 | orchestrator | 2025-09-20 11:08:59.930359 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-20 11:08:59.930369 | orchestrator | Saturday 20 September 2025 11:03:05 +0000 (0:00:00.838) 0:02:57.985 **** 2025-09-20 11:08:59.930380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.930401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.930413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.930435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.930446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.930464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.930475 | orchestrator | 2025-09-20 11:08:59.930485 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-20 11:08:59.930495 | orchestrator | Saturday 20 September 2025 11:03:09 +0000 (0:00:03.678) 0:03:01.664 **** 2025-09-20 11:08:59.930505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.930523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.930534 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.930549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.930560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.930570 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.930589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.930606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.930617 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.930626 | orchestrator | 2025-09-20 11:08:59.930636 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-20 11:08:59.930646 | orchestrator | Saturday 20 September 2025 11:03:10 +0000 (0:00:00.941) 0:03:02.605 **** 2025-09-20 11:08:59.930665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.930676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.930687 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.930912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.931055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.931075 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.931103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.931117 | orchestrator | 
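Both service-cert-copy TLS tasks are skipped for every item here because each service is rendered with 'tls_backend': 'no'. With backend TLS switched on they would place a certificate and key next to each service's generated config, along the lines of the sketch below; variable and path names are illustrative, not the role's actual ones:

- name: nova | Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  loop: "{{ nova_services | dict2items }}"
  when: nova_enable_tls_backend | bool    # 'no' in this deployment, so every item is skipped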
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.931129 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.931140 | orchestrator | 2025-09-20 11:08:59.931153 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-20 11:08:59.931165 | orchestrator | Saturday 20 September 2025 11:03:11 +0000 (0:00:00.898) 0:03:03.504 **** 2025-09-20 11:08:59.931196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931238 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931303 | orchestrator | 2025-09-20 11:08:59.931315 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-20 11:08:59.931326 | orchestrator | Saturday 20 September 2025 11:03:14 +0000 (0:00:02.736) 0:03:06.241 **** 2025-09-20 11:08:59.931338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
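Each "Copying over nova.conf" item renders the merged configuration into /etc/kolla/<service>/ on every host in that service's group; kolla-ansible merges several template sources (defaults, global overrides, per-node overrides) before writing the file, which a single-template sketch like the following glosses over:

- name: Copying over nova.conf
  ansible.builtin.template:
    src: nova.conf.j2
    dest: "/etc/kolla/{{ item.key }}/nova.conf"
    mode: "0660"
  loop: "{{ nova_services | dict2items }}"
  when: item.value.enabled | bool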
2025-09-20 11:08:59.931400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931436 | orchestrator | 2025-09-20 11:08:59.931454 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-20 11:08:59.931468 | orchestrator | Saturday 20 September 2025 11:03:21 +0000 (0:00:07.562) 0:03:13.803 **** 2025-09-20 11:08:59.931481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.931500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.931520 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.931534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.931548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.931561 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.931579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 11:08:59.931595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.931608 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.931620 | orchestrator | 2025-09-20 11:08:59.931632 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-20 11:08:59.931652 | orchestrator | Saturday 20 September 2025 11:03:22 +0000 (0:00:01.131) 0:03:14.935 **** 2025-09-20 11:08:59.931665 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.931678 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.931689 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.931702 | orchestrator | 2025-09-20 11:08:59.931720 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-20 11:08:59.931734 | orchestrator | Saturday 20 September 2025 11:03:24 +0000 (0:00:01.820) 0:03:16.756 **** 2025-09-20 11:08:59.931747 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.931760 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.931772 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.931785 | orchestrator | 2025-09-20 11:08:59.931797 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-20 11:08:59.931809 | orchestrator | Saturday 20 September 2025 11:03:25 +0000 (0:00:00.659) 0:03:17.415 **** 2025-09-20 11:08:59.931821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 11:08:59.931900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.931923 | orchestrator | 2025-09-20 11:08:59.931935 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-20 11:08:59.931946 | orchestrator | Saturday 20 September 2025 11:03:28 +0000 (0:00:03.424) 0:03:20.840 **** 2025-09-20 11:08:59.931958 | orchestrator | 2025-09-20 11:08:59.931969 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-20 11:08:59.931980 | orchestrator | Saturday 20 September 2025 11:03:29 +0000 (0:00:00.247) 0:03:21.087 **** 2025-09-20 11:08:59.931991 | orchestrator | 2025-09-20 11:08:59.932003 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-20 11:08:59.932049 | orchestrator | Saturday 20 September 2025 11:03:29 +0000 (0:00:00.242) 0:03:21.330 **** 2025-09-20 11:08:59.932061 | orchestrator | 2025-09-20 11:08:59.932073 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-20 11:08:59.932084 | orchestrator | Saturday 20 September 2025 11:03:29 +0000 (0:00:00.255) 0:03:21.586 **** 2025-09-20 11:08:59.932095 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.932107 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.932118 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.932130 | orchestrator | 2025-09-20 11:08:59.932141 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-20 11:08:59.932153 | orchestrator | Saturday 20 September 2025 11:03:51 +0000 (0:00:22.055) 0:03:43.641 **** 2025-09-20 11:08:59.932164 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.932188 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.932200 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.932211 | orchestrator | 2025-09-20 11:08:59.932222 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-20 11:08:59.932234 | orchestrator | 2025-09-20 11:08:59.932245 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 11:08:59.932256 | orchestrator | Saturday 20 September 2025 11:04:04 +0000 (0:00:12.774) 0:03:56.415 **** 2025-09-20 11:08:59.932268 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.932279 | orchestrator | 2025-09-20 11:08:59.932291 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 11:08:59.932302 | orchestrator | Saturday 20 September 2025 11:04:06 +0000 (0:00:01.729) 0:03:58.145 **** 2025-09-20 11:08:59.932312 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
11:08:59.932324 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.932335 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.932346 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.932357 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.932368 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.932379 | orchestrator | 2025-09-20 11:08:59.932390 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-20 11:08:59.932402 | orchestrator | Saturday 20 September 2025 11:04:06 +0000 (0:00:00.785) 0:03:58.930 **** 2025-09-20 11:08:59.932413 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.932423 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.932434 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.932446 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:08:59.932457 | orchestrator | 2025-09-20 11:08:59.932469 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-20 11:08:59.932487 | orchestrator | Saturday 20 September 2025 11:04:08 +0000 (0:00:01.964) 0:04:00.894 **** 2025-09-20 11:08:59.932499 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-20 11:08:59.932511 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-20 11:08:59.932522 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-20 11:08:59.932533 | orchestrator | 2025-09-20 11:08:59.932544 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-20 11:08:59.932556 | orchestrator | Saturday 20 September 2025 11:04:09 +0000 (0:00:00.714) 0:04:01.609 **** 2025-09-20 11:08:59.932567 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-20 11:08:59.932578 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-20 11:08:59.932589 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-20 11:08:59.932600 | orchestrator | 2025-09-20 11:08:59.932611 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-20 11:08:59.932623 | orchestrator | Saturday 20 September 2025 11:04:10 +0000 (0:00:01.263) 0:04:02.873 **** 2025-09-20 11:08:59.932634 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-20 11:08:59.932645 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.932656 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-20 11:08:59.932667 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.932678 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-20 11:08:59.932689 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.932700 | orchestrator | 2025-09-20 11:08:59.932712 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-20 11:08:59.932723 | orchestrator | Saturday 20 September 2025 11:04:11 +0000 (0:00:00.895) 0:04:03.769 **** 2025-09-20 11:08:59.932734 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-20 11:08:59.932745 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-20 11:08:59.932764 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 11:08:59.932775 | orchestrator | changed: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-20 11:08:59.932786 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 11:08:59.932797 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.932808 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 11:08:59.932819 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 11:08:59.932830 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-20 11:08:59.932841 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-20 11:08:59.932852 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-20 11:08:59.932863 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.932875 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-20 11:08:59.932891 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-20 11:08:59.932902 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.932913 | orchestrator | 2025-09-20 11:08:59.932924 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-20 11:08:59.932935 | orchestrator | Saturday 20 September 2025 11:04:13 +0000 (0:00:01.949) 0:04:05.718 **** 2025-09-20 11:08:59.932946 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.932957 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.932969 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.932979 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.932990 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.933001 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.933041 | orchestrator | 2025-09-20 11:08:59.933053 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-20 11:08:59.933064 | orchestrator | Saturday 20 September 2025 11:04:15 +0000 (0:00:01.923) 0:04:07.642 **** 2025-09-20 11:08:59.933076 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.933087 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.933098 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.933110 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.933121 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.933132 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.933143 | orchestrator | 2025-09-20 11:08:59.933154 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-20 11:08:59.933165 | orchestrator | Saturday 20 September 2025 11:04:17 +0000 (0:00:01.378) 0:04:09.020 **** 2025-09-20 11:08:59.933178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933408 | orchestrator | 2025-09-20 11:08:59.933419 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 11:08:59.933430 | orchestrator | Saturday 20 September 2025 11:04:20 +0000 (0:00:03.242) 0:04:12.262 **** 2025-09-20 11:08:59.933442 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.933454 | 
orchestrator | 2025-09-20 11:08:59.933465 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-20 11:08:59.933476 | orchestrator | Saturday 20 September 2025 11:04:22 +0000 (0:00:02.666) 0:04:14.928 **** 2025-09-20 11:08:59.933493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933659 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.933716 | orchestrator | 2025-09-20 11:08:59.933728 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-20 11:08:59.933740 | orchestrator | Saturday 20 September 2025 11:04:28 +0000 (0:00:05.383) 0:04:20.311 **** 2025-09-20 11:08:59.933758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.933771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.933782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.933799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.933811 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.933823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.933851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.933864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.933875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.933887 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.933904 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.933916 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.933928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.933946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.933957 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.933976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.933988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.933999 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.934112 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.934127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934139 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.934150 | orchestrator | 2025-09-20 11:08:59.934167 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-20 11:08:59.934179 | orchestrator | Saturday 20 September 2025 11:04:31 +0000 (0:00:03.612) 0:04:23.924 **** 2025-09-20 11:08:59.934190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.934210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.934231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934243 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.934255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.934266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.934283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934301 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.934312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.934324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934335 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.934353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.934365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.934377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934394 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.934411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.934422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934433 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.934445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.934463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.934474 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.934485 | orchestrator | 2025-09-20 11:08:59.934497 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 11:08:59.934508 | orchestrator | Saturday 20 September 2025 11:04:36 +0000 (0:00:04.250) 0:04:28.175 **** 2025-09-20 11:08:59.934519 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.934530 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.934541 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.934551 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 11:08:59.934562 | orchestrator | 2025-09-20 11:08:59.934573 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-20 11:08:59.934583 | orchestrator | Saturday 20 September 2025 11:04:37 +0000 (0:00:01.814) 0:04:29.990 **** 2025-09-20 11:08:59.934593 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 11:08:59.934603 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 11:08:59.934612 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 11:08:59.934622 | orchestrator | 2025-09-20 11:08:59.934632 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-20 11:08:59.934642 | orchestrator | Saturday 20 September 2025 11:04:39 +0000 (0:00:01.253) 0:04:31.243 
**** 2025-09-20 11:08:59.934651 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 11:08:59.934667 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 11:08:59.934677 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 11:08:59.934686 | orchestrator | 2025-09-20 11:08:59.934696 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-20 11:08:59.934706 | orchestrator | Saturday 20 September 2025 11:04:41 +0000 (0:00:02.174) 0:04:33.418 **** 2025-09-20 11:08:59.934716 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:08:59.934726 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:08:59.934735 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:08:59.934745 | orchestrator | 2025-09-20 11:08:59.934755 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-20 11:08:59.934765 | orchestrator | Saturday 20 September 2025 11:04:42 +0000 (0:00:00.849) 0:04:34.268 **** 2025-09-20 11:08:59.934775 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:08:59.934785 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:08:59.934794 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:08:59.934804 | orchestrator | 2025-09-20 11:08:59.934814 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-20 11:08:59.934823 | orchestrator | Saturday 20 September 2025 11:04:43 +0000 (0:00:00.864) 0:04:35.132 **** 2025-09-20 11:08:59.934837 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-20 11:08:59.934848 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-20 11:08:59.934858 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-20 11:08:59.934868 | orchestrator | 2025-09-20 11:08:59.934878 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-20 11:08:59.934888 | orchestrator | Saturday 20 September 2025 11:04:44 +0000 (0:00:01.640) 0:04:36.773 **** 2025-09-20 11:08:59.934898 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-20 11:08:59.934908 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-20 11:08:59.934918 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-20 11:08:59.934927 | orchestrator | 2025-09-20 11:08:59.934937 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-20 11:08:59.934947 | orchestrator | Saturday 20 September 2025 11:04:46 +0000 (0:00:01.434) 0:04:38.208 **** 2025-09-20 11:08:59.934956 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-20 11:08:59.934966 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-20 11:08:59.934976 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-20 11:08:59.934986 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-20 11:08:59.934995 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-20 11:08:59.935005 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-20 11:08:59.935035 | orchestrator | 2025-09-20 11:08:59.935045 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-20 11:08:59.935055 | orchestrator | Saturday 20 September 2025 11:04:52 +0000 (0:00:06.067) 0:04:44.276 **** 2025-09-20 11:08:59.935065 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
11:08:59.935074 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.935084 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.935093 | orchestrator | 2025-09-20 11:08:59.935103 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-20 11:08:59.935113 | orchestrator | Saturday 20 September 2025 11:04:52 +0000 (0:00:00.406) 0:04:44.682 **** 2025-09-20 11:08:59.935123 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.935132 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.935142 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.935151 | orchestrator | 2025-09-20 11:08:59.935161 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-20 11:08:59.935171 | orchestrator | Saturday 20 September 2025 11:04:52 +0000 (0:00:00.327) 0:04:45.010 **** 2025-09-20 11:08:59.935181 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.935191 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.935207 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.935217 | orchestrator | 2025-09-20 11:08:59.935232 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-20 11:08:59.935243 | orchestrator | Saturday 20 September 2025 11:04:54 +0000 (0:00:01.628) 0:04:46.638 **** 2025-09-20 11:08:59.935253 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-20 11:08:59.935264 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-20 11:08:59.935274 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-20 11:08:59.935284 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-20 11:08:59.935294 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-20 11:08:59.935304 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-20 11:08:59.935314 | orchestrator | 2025-09-20 11:08:59.935325 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-20 11:08:59.935335 | orchestrator | Saturday 20 September 2025 11:04:58 +0000 (0:00:04.190) 0:04:50.828 **** 2025-09-20 11:08:59.935345 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 11:08:59.935355 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 11:08:59.935365 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 11:08:59.935375 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 11:08:59.935385 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.935395 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 11:08:59.935405 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.935415 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 11:08:59.935425 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.935435 | orchestrator | 2025-09-20 11:08:59.935444 
| orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-20 11:08:59.935454 | orchestrator | Saturday 20 September 2025 11:05:02 +0000 (0:00:03.755) 0:04:54.583 **** 2025-09-20 11:08:59.935464 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.935474 | orchestrator | 2025-09-20 11:08:59.935484 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-20 11:08:59.935495 | orchestrator | Saturday 20 September 2025 11:05:02 +0000 (0:00:00.139) 0:04:54.723 **** 2025-09-20 11:08:59.935504 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.935514 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.935524 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.935534 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.935543 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.935553 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.935563 | orchestrator | 2025-09-20 11:08:59.935578 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-20 11:08:59.935588 | orchestrator | Saturday 20 September 2025 11:05:03 +0000 (0:00:00.778) 0:04:55.502 **** 2025-09-20 11:08:59.935598 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 11:08:59.935608 | orchestrator | 2025-09-20 11:08:59.935618 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-20 11:08:59.935628 | orchestrator | Saturday 20 September 2025 11:05:04 +0000 (0:00:00.779) 0:04:56.281 **** 2025-09-20 11:08:59.935638 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.935648 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.935658 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.935668 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.935683 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.935693 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.935703 | orchestrator | 2025-09-20 11:08:59.935713 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-20 11:08:59.935723 | orchestrator | Saturday 20 September 2025 11:05:05 +0000 (0:00:00.872) 0:04:57.153 **** 2025-09-20 11:08:59.935734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.935898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936491 | orchestrator | 2025-09-20 11:08:59.936502 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-20 11:08:59.936512 | orchestrator | Saturday 20 September 2025 11:05:09 +0000 (0:00:04.088) 0:05:01.242 **** 2025-09-20 11:08:59.936522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.936539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.936559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.936570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.936609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.936621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.936632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.936774 | orchestrator | 2025-09-20 11:08:59.936784 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-20 11:08:59.936794 | orchestrator | Saturday 20 September 2025 11:05:15 +0000 (0:00:06.147) 0:05:07.390 **** 2025-09-20 11:08:59.936804 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.936814 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.936824 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.936834 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.936844 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.936853 | orchestrator | skipping: [testbed-node-2] 
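The task records above all follow the same fixed shape — an ISO timestamp, the reporting node ("orchestrator"), and a payload that is either a "TASK [...]" header or a per-host "ok/changed/skipping" result — so per-task outcomes can be tallied mechanically instead of read record by record. Below is a minimal sketch of such a tally, assuming Python 3 and the raw console text on stdin; the regexes and the summarise() helper are illustrative assumptions, not part of the Zuul, Ansible, or OSISM tooling.

#!/usr/bin/env python3
# Minimal sketch: tally per-task host results from a flattened
# Zuul/Ansible console log such as the one above. The regexes and
# summarise() are illustrative assumptions, not project tooling.
import re
import sys
from collections import Counter, defaultdict

# Every console record begins with "<timestamp> | orchestrator | ".
RECORD = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \| orchestrator \| ")
TASK = re.compile(r"TASK \[(?P<task>[^\]]+)\]")
RESULT = re.compile(r"^(?P<status>ok|changed|skipping|failed|fatal): \[(?P<host>[^\]\s]+)")

def summarise(text):
    """Map task name -> Counter of result statuses (one count per
    printed result record, so a host recurs once per loop item)."""
    counts = defaultdict(Counter)
    current = None
    # Re-split the flattened console text into individual records.
    for record in RECORD.split(text):
        record = record.strip()
        if not record:
            continue
        header = TASK.search(record)
        if header:
            current = header.group("task")
            continue
        result = RESULT.match(record)
        if current and result:
            counts[current][result.group("status")] += 1
    return counts

if __name__ == "__main__":
    for task, statuses in summarise(sys.stdin.read()).items():
        print(f"{task}: {dict(statuses)}")

Fed this job's console output, it prints one line per task, counting printed result records (a host appears once per loop item), e.g. "nova-cell : Copy over ceph.conf: {'changed': 6}" for the three compute nodes times the nova-compute and nova-libvirt items shown above.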
2025-09-20 11:08:59.936863 | orchestrator | 2025-09-20 11:08:59.936873 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-20 11:08:59.936883 | orchestrator | Saturday 20 September 2025 11:05:16 +0000 (0:00:01.194) 0:05:08.584 **** 2025-09-20 11:08:59.936893 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-20 11:08:59.936902 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-20 11:08:59.936912 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-20 11:08:59.936922 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-20 11:08:59.936937 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-20 11:08:59.936947 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-20 11:08:59.936956 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-20 11:08:59.936966 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.936976 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-20 11:08:59.936986 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.936996 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-20 11:08:59.937005 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937070 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-20 11:08:59.937083 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-20 11:08:59.937093 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-20 11:08:59.937108 | orchestrator | 2025-09-20 11:08:59.937117 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-20 11:08:59.937127 | orchestrator | Saturday 20 September 2025 11:05:20 +0000 (0:00:04.366) 0:05:12.950 **** 2025-09-20 11:08:59.937136 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.937144 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.937153 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.937162 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937171 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.937180 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.937189 | orchestrator | 2025-09-20 11:08:59.937198 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-20 11:08:59.937207 | orchestrator | Saturday 20 September 2025 11:05:21 +0000 (0:00:00.584) 0:05:13.535 **** 2025-09-20 11:08:59.937216 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-20 11:08:59.937225 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-20 11:08:59.937234 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-20 11:08:59.937243 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-20 11:08:59.937253 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-20 11:08:59.937261 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-20 11:08:59.937274 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-20 11:08:59.937284 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-20 11:08:59.937292 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-20 11:08:59.937301 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-20 11:08:59.937310 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.937319 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-20 11:08:59.937328 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.937337 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-20 11:08:59.937346 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937355 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-20 11:08:59.937364 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-20 11:08:59.937374 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-20 11:08:59.937382 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-20 11:08:59.937391 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-20 11:08:59.937400 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-20 11:08:59.937409 | orchestrator | 2025-09-20 11:08:59.937419 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-20 11:08:59.937427 | orchestrator | Saturday 20 September 2025 11:05:26 +0000 (0:00:05.383) 0:05:18.919 **** 2025-09-20 11:08:59.937440 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 11:08:59.937448 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 11:08:59.937488 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 11:08:59.937498 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 11:08:59.937506 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 11:08:59.937514 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-20 
11:08:59.937522 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 11:08:59.937529 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-20 11:08:59.937537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-20 11:08:59.937545 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 11:08:59.937553 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 11:08:59.937561 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 11:08:59.937569 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-20 11:08:59.937576 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.937585 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 11:08:59.937592 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 11:08:59.937600 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-20 11:08:59.937608 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937616 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-20 11:08:59.937623 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.937631 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 11:08:59.937639 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 11:08:59.937647 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 11:08:59.937655 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 11:08:59.937663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 11:08:59.937671 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 11:08:59.937678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 11:08:59.937686 | orchestrator | 2025-09-20 11:08:59.937694 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-20 11:08:59.937706 | orchestrator | Saturday 20 September 2025 11:05:34 +0000 (0:00:07.478) 0:05:26.398 **** 2025-09-20 11:08:59.937714 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.937723 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.937730 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.937738 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937746 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.937754 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.937762 | orchestrator | 2025-09-20 11:08:59.937770 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-20 11:08:59.937778 | orchestrator | Saturday 20 September 2025 11:05:35 +0000 (0:00:00.751) 0:05:27.149 **** 2025-09-20 11:08:59.937786 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.937799 | orchestrator | 
skipping: [testbed-node-4] 2025-09-20 11:08:59.937807 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.937815 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937823 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.937830 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.937838 | orchestrator | 2025-09-20 11:08:59.937846 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-20 11:08:59.937854 | orchestrator | Saturday 20 September 2025 11:05:35 +0000 (0:00:00.644) 0:05:27.794 **** 2025-09-20 11:08:59.937862 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.937870 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.937878 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.937885 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.937893 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.937901 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.937909 | orchestrator | 2025-09-20 11:08:59.937917 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-20 11:08:59.937925 | orchestrator | Saturday 20 September 2025 11:05:38 +0000 (0:00:02.785) 0:05:30.579 **** 2025-09-20 11:08:59.937955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.937966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.937974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.937983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.937997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.938005 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.938053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.938062 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.938097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.938107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.938152 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.938162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.938175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.938190 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.938198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 11:08:59.938207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 11:08:59.938243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.938253 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.938261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 11:08:59.938270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 11:08:59.938283 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.938292 | orchestrator | 2025-09-20 11:08:59.938300 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-20 11:08:59.938308 | orchestrator | Saturday 20 September 2025 11:05:40 +0000 (0:00:01.572) 0:05:32.152 **** 2025-09-20 11:08:59.938316 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-20 11:08:59.938324 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-20 11:08:59.938332 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.938340 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-20 11:08:59.938349 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-20 11:08:59.938357 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.938369 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-20 11:08:59.938377 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-20 11:08:59.938385 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.938393 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-20 11:08:59.938401 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-20 11:08:59.938409 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.938417 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-20 11:08:59.938426 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-20 11:08:59.938434 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.938442 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-compute)  2025-09-20 11:08:59.938450 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-20 11:08:59.938458 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.938466 | orchestrator | 2025-09-20 11:08:59.938474 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-20 11:08:59.938482 | orchestrator | Saturday 20 September 2025 11:05:40 +0000 (0:00:00.809) 0:05:32.961 **** 2025-09-20 11:08:59.938490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2025-09-20 11:08:59.938558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 11:08:59.938711 | orchestrator | 2025-09-20 11:08:59.938720 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 11:08:59.938728 | orchestrator | Saturday 20 September 2025 11:05:44 +0000 (0:00:03.382) 0:05:36.343 **** 2025-09-20 11:08:59.938736 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.938744 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.938752 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.938760 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.938768 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.938776 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.938783 | orchestrator | 2025-09-20 11:08:59.938791 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 11:08:59.938799 | orchestrator | Saturday 20 September 2025 11:05:44 +0000 (0:00:00.665) 0:05:37.009 **** 2025-09-20 11:08:59.938807 | orchestrator | 2025-09-20 11:08:59.938815 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 11:08:59.938823 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:00.126) 0:05:37.135 **** 2025-09-20 11:08:59.938831 | orchestrator | 2025-09-20 11:08:59.938839 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 11:08:59.938847 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:00.127) 0:05:37.263 **** 2025-09-20 11:08:59.938855 | orchestrator | 2025-09-20 11:08:59.938863 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 11:08:59.938871 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:00.128) 0:05:37.392 **** 2025-09-20 11:08:59.938879 | orchestrator | 2025-09-20 11:08:59.938887 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 11:08:59.938895 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:00.125) 0:05:37.517 **** 2025-09-20 11:08:59.938903 | orchestrator | 2025-09-20 11:08:59.938914 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 11:08:59.938923 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:00.123) 0:05:37.640 **** 2025-09-20 11:08:59.938931 | orchestrator | 2025-09-20 11:08:59.938939 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-20 11:08:59.938947 | orchestrator | Saturday 20 September 2025 11:05:45 +0000 (0:00:00.234) 0:05:37.874 **** 2025-09-20 11:08:59.938955 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.938962 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.938970 | orchestrator | changed: [testbed-node-2] 2025-09-20 
11:08:59.938978 | orchestrator | 2025-09-20 11:08:59.938986 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-20 11:08:59.938994 | orchestrator | Saturday 20 September 2025 11:05:59 +0000 (0:00:13.811) 0:05:51.685 **** 2025-09-20 11:08:59.939002 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.939025 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.939033 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.939041 | orchestrator | 2025-09-20 11:08:59.939049 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-20 11:08:59.939057 | orchestrator | Saturday 20 September 2025 11:06:19 +0000 (0:00:19.470) 0:06:11.155 **** 2025-09-20 11:08:59.939065 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.939073 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.939081 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.939089 | orchestrator | 2025-09-20 11:08:59.939097 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-20 11:08:59.939111 | orchestrator | Saturday 20 September 2025 11:06:43 +0000 (0:00:24.272) 0:06:35.428 **** 2025-09-20 11:08:59.939119 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.939127 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.939136 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.939143 | orchestrator | 2025-09-20 11:08:59.939151 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-20 11:08:59.939160 | orchestrator | Saturday 20 September 2025 11:07:26 +0000 (0:00:42.786) 0:07:18.215 **** 2025-09-20 11:08:59.939168 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.939175 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.939183 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.939191 | orchestrator | 2025-09-20 11:08:59.939199 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-20 11:08:59.939207 | orchestrator | Saturday 20 September 2025 11:07:27 +0000 (0:00:01.138) 0:07:19.353 **** 2025-09-20 11:08:59.939215 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.939223 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.939231 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.939239 | orchestrator | 2025-09-20 11:08:59.939247 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-20 11:08:59.939277 | orchestrator | Saturday 20 September 2025 11:07:28 +0000 (0:00:00.763) 0:07:20.117 **** 2025-09-20 11:08:59.939287 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:08:59.939295 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:08:59.939303 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:08:59.939311 | orchestrator | 2025-09-20 11:08:59.939319 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-20 11:08:59.939327 | orchestrator | Saturday 20 September 2025 11:07:51 +0000 (0:00:23.855) 0:07:43.973 **** 2025-09-20 11:08:59.939335 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.939342 | orchestrator | 2025-09-20 11:08:59.939350 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-20 11:08:59.939358 | orchestrator | Saturday 
20 September 2025 11:07:52 +0000 (0:00:00.128) 0:07:44.101 **** 2025-09-20 11:08:59.939366 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.939374 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.939382 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.939390 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.939398 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.939406 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-20 11:08:59.939414 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 11:08:59.939422 | orchestrator | 2025-09-20 11:08:59.939430 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-20 11:08:59.939438 | orchestrator | Saturday 20 September 2025 11:08:12 +0000 (0:00:20.280) 0:08:04.381 **** 2025-09-20 11:08:59.939446 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.939454 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.939462 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.939470 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.939478 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.939486 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.939494 | orchestrator | 2025-09-20 11:08:59.939502 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-20 11:08:59.939509 | orchestrator | Saturday 20 September 2025 11:08:22 +0000 (0:00:10.323) 0:08:14.705 **** 2025-09-20 11:08:59.939517 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.939525 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.939533 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.939541 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.939549 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.939561 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-09-20 11:08:59.939569 | orchestrator | 2025-09-20 11:08:59.939577 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-20 11:08:59.939585 | orchestrator | Saturday 20 September 2025 11:08:26 +0000 (0:00:03.792) 0:08:18.498 **** 2025-09-20 11:08:59.939593 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 11:08:59.939601 | orchestrator | 2025-09-20 11:08:59.939609 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-20 11:08:59.939617 | orchestrator | Saturday 20 September 2025 11:08:38 +0000 (0:00:11.541) 0:08:30.039 **** 2025-09-20 11:08:59.939629 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 11:08:59.939637 | orchestrator | 2025-09-20 11:08:59.939645 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-20 11:08:59.939653 | orchestrator | Saturday 20 September 2025 11:08:39 +0000 (0:00:01.191) 0:08:31.230 **** 2025-09-20 11:08:59.939661 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.939670 | orchestrator | 2025-09-20 11:08:59.939678 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-20 11:08:59.939686 | orchestrator | Saturday 20 September 2025 11:08:40 +0000 
(0:00:01.258) 0:08:32.489 **** 2025-09-20 11:08:59.939693 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 11:08:59.939701 | orchestrator | 2025-09-20 11:08:59.939709 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-20 11:08:59.939718 | orchestrator | Saturday 20 September 2025 11:08:50 +0000 (0:00:09.852) 0:08:42.341 **** 2025-09-20 11:08:59.939726 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:08:59.939734 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:08:59.939741 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:08:59.939749 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.939757 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:08:59.939765 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:08:59.939772 | orchestrator | 2025-09-20 11:08:59.939780 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-20 11:08:59.939788 | orchestrator | 2025-09-20 11:08:59.939796 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-20 11:08:59.939804 | orchestrator | Saturday 20 September 2025 11:08:51 +0000 (0:00:01.641) 0:08:43.983 **** 2025-09-20 11:08:59.939812 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.939820 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.939828 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.939836 | orchestrator | 2025-09-20 11:08:59.939844 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-20 11:08:59.939852 | orchestrator | 2025-09-20 11:08:59.939860 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-20 11:08:59.939868 | orchestrator | Saturday 20 September 2025 11:08:53 +0000 (0:00:01.045) 0:08:45.029 **** 2025-09-20 11:08:59.939876 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.939883 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.939891 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.939899 | orchestrator | 2025-09-20 11:08:59.939907 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-20 11:08:59.939915 | orchestrator | 2025-09-20 11:08:59.939923 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-20 11:08:59.939931 | orchestrator | Saturday 20 September 2025 11:08:53 +0000 (0:00:00.443) 0:08:45.472 **** 2025-09-20 11:08:59.939939 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-20 11:08:59.939968 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-20 11:08:59.939977 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-20 11:08:59.939985 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-20 11:08:59.939993 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-20 11:08:59.940049 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-20 11:08:59.940060 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:08:59.940068 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-20 11:08:59.940076 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-20 11:08:59.940084 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  
2025-09-20 11:08:59.940093 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-20 11:08:59.940101 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-20 11:08:59.940109 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-20 11:08:59.940117 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:08:59.940125 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-20 11:08:59.940133 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-20 11:08:59.940141 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-20 11:08:59.940150 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-20 11:08:59.940157 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-20 11:08:59.940165 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-20 11:08:59.940173 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:08:59.940181 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-20 11:08:59.940189 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-20 11:08:59.940197 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-20 11:08:59.940205 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-20 11:08:59.940213 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-20 11:08:59.940221 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-20 11:08:59.940229 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.940237 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-20 11:08:59.940245 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-20 11:08:59.940252 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-20 11:08:59.940260 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-20 11:08:59.940268 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-20 11:08:59.940276 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-20 11:08:59.940284 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.940292 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-20 11:08:59.940299 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-20 11:08:59.940314 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-20 11:08:59.940322 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-20 11:08:59.940330 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-20 11:08:59.940338 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-20 11:08:59.940345 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.940353 | orchestrator | 2025-09-20 11:08:59.940361 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-20 11:08:59.940369 | orchestrator | 2025-09-20 11:08:59.940378 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-20 11:08:59.940386 | orchestrator | Saturday 20 September 2025 11:08:54 +0000 (0:00:01.159) 0:08:46.631 **** 2025-09-20 11:08:59.940394 | orchestrator | skipping: 
[testbed-node-0] => (item=nova-scheduler)
2025-09-20 11:08:59.940402 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-20 11:08:59.940410 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:08:59.940418 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-20 11:08:59.940432 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-20 11:08:59.940440 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:08:59.940448 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-20 11:08:59.940456 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-20 11:08:59.940464 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:08:59.940472 | orchestrator |
2025-09-20 11:08:59.940480 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-20 11:08:59.940489 | orchestrator |
2025-09-20 11:08:59.940497 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-20 11:08:59.940505 | orchestrator | Saturday 20 September 2025 11:08:55 +0000 (0:00:00.834) 0:08:47.466 ****
2025-09-20 11:08:59.940513 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:08:59.940521 | orchestrator |
2025-09-20 11:08:59.940529 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-20 11:08:59.940537 | orchestrator |
2025-09-20 11:08:59.940545 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-20 11:08:59.940553 | orchestrator | Saturday 20 September 2025 11:08:56 +0000 (0:00:00.760) 0:08:48.226 ****
2025-09-20 11:08:59.940561 | orchestrator | skipping: [testbed-node-0]
2025-09-20 11:08:59.940568 | orchestrator | skipping: [testbed-node-1]
2025-09-20 11:08:59.940575 | orchestrator | skipping: [testbed-node-2]
2025-09-20 11:08:59.940582 | orchestrator |
2025-09-20 11:08:59.940588 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 11:08:59.940595 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 11:08:59.940625 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-20 11:08:59.940633 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-20 11:08:59.940640 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-20 11:08:59.940647 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-20 11:08:59.940654 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-20 11:08:59.940661 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-20 11:08:59.940669 | orchestrator |
2025-09-20 11:08:59.940675 | orchestrator |
2025-09-20 11:08:59.940682 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:08:59.940689 | orchestrator | Saturday 20 September 2025 11:08:56 +0000 (0:00:00.434) 0:08:48.661 ****
2025-09-20 11:08:59.940696 | orchestrator | ===============================================================================
2025-09-20 11:08:59.940705 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.79s
2025-09-20 11:08:59.940718 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 24.52s
2025-09-20 11:08:59.940729 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.27s
2025-09-20 11:08:59.940741 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.86s
2025-09-20 11:08:59.940752 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.06s
2025-09-20 11:08:59.940763 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.28s
2025-09-20 11:08:59.940773 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.22s
2025-09-20 11:08:59.940793 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.47s
2025-09-20 11:08:59.940804 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.80s
2025-09-20 11:08:59.940815 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.81s
2025-09-20 11:08:59.940826 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.77s
2025-09-20 11:08:59.940837 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.09s
2025-09-20 11:08:59.940854 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.54s
2025-09-20 11:08:59.940867 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.21s
2025-09-20 11:08:59.940879 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.54s
2025-09-20 11:08:59.940891 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.32s
2025-09-20 11:08:59.940903 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.18s
2025-09-20 11:08:59.940910 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.85s
2025-09-20 11:08:59.940917 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.12s
2025-09-20 11:08:59.940923 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 7.56s
2025-09-20 11:08:59.940931 | orchestrator |
2025-09-20 11:08:59.940938 | orchestrator |
2025-09-20 11:08:59.940945 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 11:08:59.940952 | orchestrator |
2025-09-20 11:08:59.940959 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 11:08:59.940966 | orchestrator | Saturday 20 September 2025 11:06:38 +0000 (0:00:00.273) 0:00:00.273 ****
2025-09-20 11:08:59.940973 | orchestrator | ok: [testbed-node-0]
2025-09-20 11:08:59.940980 | orchestrator | ok: [testbed-node-1]
2025-09-20 11:08:59.940987 | orchestrator | ok: [testbed-node-2]
2025-09-20 11:08:59.940994 | orchestrator |
2025-09-20 11:08:59.941001 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 11:08:59.941023 | orchestrator | Saturday 20 September 2025 11:06:38 +0000 (0:00:00.303) 0:00:00.576 ****
2025-09-20 11:08:59.941030 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-20 11:08:59.941037 | orchestrator | ok: 
[testbed-node-1] => (item=enable_grafana_True) 2025-09-20 11:08:59.941044 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-20 11:08:59.941051 | orchestrator | 2025-09-20 11:08:59.941058 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-20 11:08:59.941065 | orchestrator | 2025-09-20 11:08:59.941072 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-20 11:08:59.941079 | orchestrator | Saturday 20 September 2025 11:06:39 +0000 (0:00:00.437) 0:00:01.013 **** 2025-09-20 11:08:59.941086 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.941093 | orchestrator | 2025-09-20 11:08:59.941100 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-20 11:08:59.941107 | orchestrator | Saturday 20 September 2025 11:06:39 +0000 (0:00:00.508) 0:00:01.522 **** 2025-09-20 11:08:59.941146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941178 | orchestrator | 2025-09-20 11:08:59.941185 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-20 11:08:59.941193 | orchestrator | Saturday 20 September 2025 11:06:40 +0000 (0:00:00.802) 0:00:02.324 **** 2025-09-20 11:08:59.941199 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-20 
11:08:59.941211 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-20 11:08:59.941219 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 11:08:59.941226 | orchestrator | 2025-09-20 11:08:59.941233 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-20 11:08:59.941240 | orchestrator | Saturday 20 September 2025 11:06:41 +0000 (0:00:00.859) 0:00:03.184 **** 2025-09-20 11:08:59.941247 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:08:59.941255 | orchestrator | 2025-09-20 11:08:59.941261 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-20 11:08:59.941268 | orchestrator | Saturday 20 September 2025 11:06:42 +0000 (0:00:00.759) 0:00:03.944 **** 2025-09-20 11:08:59.941276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941324 | orchestrator | 2025-09-20 11:08:59.941331 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-20 11:08:59.941338 | orchestrator | Saturday 20 September 2025 11:06:43 +0000 (0:00:01.559) 0:00:05.504 **** 2025-09-20 11:08:59.941345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 11:08:59.941352 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.941359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 11:08:59.941370 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.941377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 11:08:59.941384 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.941390 | orchestrator | 2025-09-20 11:08:59.941397 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-20 11:08:59.941404 | orchestrator | Saturday 20 September 2025 11:06:44 +0000 (0:00:00.692) 0:00:06.196 **** 2025-09-20 11:08:59.941411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 11:08:59.941439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 11:08:59.941451 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.941458 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.941465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 11:08:59.941473 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.941480 | orchestrator | 2025-09-20 11:08:59.941486 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-20 11:08:59.941493 | orchestrator | Saturday 20 September 2025 11:06:46 +0000 (0:00:01.592) 0:00:07.789 **** 2025-09-20 11:08:59.941500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 
11:08:59.941525 | orchestrator | 2025-09-20 11:08:59.941532 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-20 11:08:59.941539 | orchestrator | Saturday 20 September 2025 11:06:47 +0000 (0:00:01.433) 0:00:09.222 **** 2025-09-20 11:08:59.941569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.941592 | orchestrator | 2025-09-20 11:08:59.941599 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-20 11:08:59.941606 | orchestrator | Saturday 20 September 2025 11:06:49 +0000 (0:00:01.615) 0:00:10.838 **** 2025-09-20 11:08:59.941612 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.941619 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.941626 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.941633 | orchestrator | 2025-09-20 11:08:59.941640 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-20 11:08:59.941647 | orchestrator | Saturday 20 September 2025 11:06:49 +0000 (0:00:00.432) 0:00:11.270 **** 2025-09-20 11:08:59.941653 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-20 11:08:59.941660 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-20 11:08:59.941667 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-20 11:08:59.941674 | orchestrator | 2025-09-20 11:08:59.941680 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-20 11:08:59.941691 | orchestrator | Saturday 20 September 2025 11:06:50 +0000 (0:00:01.169) 0:00:12.440 **** 2025-09-20 11:08:59.941698 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-20 11:08:59.941705 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-20 11:08:59.941712 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-20 11:08:59.941718 | orchestrator | 2025-09-20 11:08:59.941725 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-20 11:08:59.941732 | orchestrator | Saturday 20 September 2025 11:06:52 +0000 (0:00:01.445) 0:00:13.886 **** 2025-09-20 11:08:59.941743 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 11:08:59.941750 | orchestrator | 2025-09-20 11:08:59.941757 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-20 11:08:59.941763 | orchestrator | Saturday 20 September 2025 11:06:52 +0000 (0:00:00.609) 0:00:14.495 **** 2025-09-20 11:08:59.941770 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-20 11:08:59.941777 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-20 11:08:59.941784 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.941791 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:08:59.941797 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:08:59.941804 | orchestrator | 2025-09-20 11:08:59.941811 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-20 11:08:59.941818 | orchestrator | Saturday 20 September 2025 11:06:53 +0000 (0:00:00.713) 0:00:15.209 **** 2025-09-20 11:08:59.941824 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.941831 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.941838 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.941844 | orchestrator | 2025-09-20 11:08:59.941851 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-20 11:08:59.941858 | orchestrator | Saturday 20 September 2025 11:06:54 +0000 (0:00:00.521) 0:00:15.730 **** 2025-09-20 11:08:59.941887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1102108, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0129464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1102108, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0129464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1102108, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0129464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1102153, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.025325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1102153, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.025325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1102153, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.025325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1102118, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0165143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1102118, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0165143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1102118, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0165143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1102157, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0279465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.941999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1102157, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0279465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1102157, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0279465, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1102130, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0206072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1102130, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0206072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1102130, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0206072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1102144, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0240688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1102144, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0240688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1102144, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0240688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102105, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.011917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102105, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.011917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102105, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.011917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102110, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0145006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102110, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0145006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102110, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0145006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1102122, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.016971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1102122, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.016971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1102122, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.016971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1102135, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0221674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1102135, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0221674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1102135, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0221674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1102149, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.024782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1102149, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.024782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1102149, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.024782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1102116, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0149577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1102116, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0149577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1102116, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0149577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1102142, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0233967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1102142, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0233967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 
11:08:59.942373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1102142, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0233967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1102132, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0213377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1102132, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0213377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1102132, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0213377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1102128, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0193932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1102128, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0193932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1102128, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0193932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1102126, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0183477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1102126, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0183477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1102126, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0183477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1102138, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.023041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1102138, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.023041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1102138, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.023041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1102123, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0172877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1102123, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0172877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1102147, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 
1758363531.0240688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1102123, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0172877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1102147, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0240688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102268, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0659468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102268, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0659468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1102147, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0240688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102191, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0409465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102191, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0409465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102268, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0659468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102180, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0320015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102180, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0320015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942613 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102191, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0409465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1102215, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0459673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1102215, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0459673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102180, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0320015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102172, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0289464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102172, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0289464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1102215, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0459673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102247, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0571706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102247, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0571706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102172, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0289464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102219, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0537367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102219, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0537367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102247, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0571706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1102250, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0582404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1102250, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0582404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102219, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0537367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102263, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0649562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102263, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0649562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1102250, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0582404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1102244, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0563428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1102244, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0563428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102263, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0649562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102207, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0445993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102207, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0445993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1102244, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0563428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102189, 
'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0359464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102189, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0359464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102207, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0445993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102200, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0432506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102200, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0432506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102189, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0359464, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102182, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0347826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102182, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0347826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102200, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0432506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1102212, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.04555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1102212, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.04555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102182, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0347826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102257, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0645018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102257, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0645018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1102212, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.04555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102254, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0599468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102254, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0599468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.942992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102257, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0645018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102174, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.030212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102174, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.030212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102254, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0599468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943043 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102177, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0311177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102177, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0311177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102174, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.030212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102241, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0554235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102241, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0554235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943090 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102177, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0311177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1102252, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0593057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1102252, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0593057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102241, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0554235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1102252, 'dev': 116, 'nlink': 1, 'atime': 1758360548.0, 'mtime': 1758360548.0, 'ctime': 1758363531.0593057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 11:08:59.943133 | orchestrator | 2025-09-20 11:08:59.943143 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 
2025-09-20 11:08:59.943150 | orchestrator | Saturday 20 September 2025 11:07:31 +0000 (0:00:36.963) 0:00:52.694 **** 2025-09-20 11:08:59.943157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.943165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.943172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 11:08:59.943179 | orchestrator | 2025-09-20 11:08:59.943189 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-20 11:08:59.943196 | orchestrator | Saturday 20 September 2025 11:07:32 +0000 (0:00:01.450) 0:00:54.144 **** 2025-09-20 11:08:59.943203 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.943210 | orchestrator | 2025-09-20 11:08:59.943216 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-20 11:08:59.943223 | orchestrator | Saturday 20 September 2025 11:07:35 +0000 (0:00:02.515) 0:00:56.659 **** 2025-09-20 11:08:59.943234 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.943241 | orchestrator | 2025-09-20 11:08:59.943248 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-20 11:08:59.943255 | orchestrator | Saturday 20 September 2025 11:07:37 +0000 (0:00:02.159) 0:00:58.819 **** 2025-09-20 11:08:59.943261 | orchestrator | 2025-09-20 11:08:59.943268 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-20 11:08:59.943275 | orchestrator | Saturday 20 September 2025 11:07:37 +0000 (0:00:00.060) 0:00:58.880 **** 2025-09-20 11:08:59.943282 | orchestrator | 
2025-09-20 11:08:59.943288 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-20 11:08:59.943295 | orchestrator | Saturday 20 September 2025 11:07:37 +0000 (0:00:00.069) 0:00:58.949 **** 2025-09-20 11:08:59.943302 | orchestrator | 2025-09-20 11:08:59.943308 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-20 11:08:59.943315 | orchestrator | Saturday 20 September 2025 11:07:37 +0000 (0:00:00.168) 0:00:59.118 **** 2025-09-20 11:08:59.943322 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.943328 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.943335 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:08:59.943342 | orchestrator | 2025-09-20 11:08:59.943348 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-20 11:08:59.943355 | orchestrator | Saturday 20 September 2025 11:07:39 +0000 (0:00:01.718) 0:01:00.837 **** 2025-09-20 11:08:59.943361 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.943368 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.943375 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-20 11:08:59.943382 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-20 11:08:59.943388 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-20 11:08:59.943395 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.943402 | orchestrator | 2025-09-20 11:08:59.943408 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-20 11:08:59.943415 | orchestrator | Saturday 20 September 2025 11:08:17 +0000 (0:00:38.103) 0:01:38.940 **** 2025-09-20 11:08:59.943425 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.943432 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:08:59.943439 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:08:59.943446 | orchestrator | 2025-09-20 11:08:59.943452 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-20 11:08:59.943459 | orchestrator | Saturday 20 September 2025 11:08:52 +0000 (0:00:34.833) 0:02:13.774 **** 2025-09-20 11:08:59.943466 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:08:59.943473 | orchestrator | 2025-09-20 11:08:59.943479 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-20 11:08:59.943486 | orchestrator | Saturday 20 September 2025 11:08:54 +0000 (0:00:01.942) 0:02:15.716 **** 2025-09-20 11:08:59.943493 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.943499 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:08:59.943506 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:08:59.943513 | orchestrator | 2025-09-20 11:08:59.943519 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-20 11:08:59.943526 | orchestrator | Saturday 20 September 2025 11:08:54 +0000 (0:00:00.411) 0:02:16.128 **** 2025-09-20 11:08:59.943534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 
'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-20 11:08:59.943542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-20 11:08:59.943553 | orchestrator | 2025-09-20 11:08:59.943560 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-20 11:08:59.943567 | orchestrator | Saturday 20 September 2025 11:08:56 +0000 (0:00:02.172) 0:02:18.301 **** 2025-09-20 11:08:59.943574 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:08:59.943581 | orchestrator | 2025-09-20 11:08:59.943587 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:08:59.943594 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-20 11:08:59.943601 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-20 11:08:59.943608 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-20 11:08:59.943614 | orchestrator | 2025-09-20 11:08:59.943621 | orchestrator | 2025-09-20 11:08:59.943631 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:08:59.943638 | orchestrator | Saturday 20 September 2025 11:08:56 +0000 (0:00:00.303) 0:02:18.604 **** 2025-09-20 11:08:59.943645 | orchestrator | =============================================================================== 2025-09-20 11:08:59.943651 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.10s 2025-09-20 11:08:59.943658 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.96s 2025-09-20 11:08:59.943665 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.83s 2025-09-20 11:08:59.943671 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.52s 2025-09-20 11:08:59.943678 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.17s 2025-09-20 11:08:59.943684 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.16s 2025-09-20 11:08:59.943691 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.94s 2025-09-20 11:08:59.943698 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.72s 2025-09-20 11:08:59.943704 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.62s 2025-09-20 11:08:59.943711 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.59s 2025-09-20 11:08:59.943718 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.56s 2025-09-20 11:08:59.943724 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.45s 2025-09-20 11:08:59.943731 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.45s 2025-09-20 11:08:59.943738 | orchestrator | grafana : Copying over 
config.json files -------------------------------- 1.43s 2025-09-20 11:08:59.943744 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.17s 2025-09-20 11:08:59.943751 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s 2025-09-20 11:08:59.943758 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s 2025-09-20 11:08:59.943765 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.76s 2025-09-20 11:08:59.943772 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.71s 2025-09-20 11:08:59.943778 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.69s 2025-09-20 11:08:59.943788 | orchestrator | 2025-09-20 11:08:59 | INFO  | Task 0d467328-9f09-4fc5-9e30-3e60bce2bfd5 is in state SUCCESS 2025-09-20 11:08:59.943796 | orchestrator | 2025-09-20 11:08:59 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:02.973739 | orchestrator | 2025-09-20 11:09:02 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:02.973833 | orchestrator | 2025-09-20 11:09:02 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:06.017424 | orchestrator | 2025-09-20 11:09:06 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:06.017528 | orchestrator | 2025-09-20 11:09:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:09.052778 | orchestrator | 2025-09-20 11:09:09 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:09.052909 | orchestrator | 2025-09-20 11:09:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:12.090345 | orchestrator | 2025-09-20 11:09:12 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:12.090501 | orchestrator | 2025-09-20 11:09:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:15.135657 | orchestrator | 2025-09-20 11:09:15 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:15.135762 | orchestrator | 2025-09-20 11:09:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:18.178327 | orchestrator | 2025-09-20 11:09:18 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:18.178433 | orchestrator | 2025-09-20 11:09:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:21.220982 | orchestrator | 2025-09-20 11:09:21 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:21.221134 | orchestrator | 2025-09-20 11:09:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:24.262168 | orchestrator | 2025-09-20 11:09:24 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:24.262255 | orchestrator | 2025-09-20 11:09:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:27.306656 | orchestrator | 2025-09-20 11:09:27 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:27.306763 | orchestrator | 2025-09-20 11:09:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:30.348565 | orchestrator | 2025-09-20 11:09:30 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:30.348688 | orchestrator | 2025-09-20 11:09:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 
11:09:33.411030 | orchestrator | 2025-09-20 11:09:33 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:33.411122 | orchestrator | 2025-09-20 11:09:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:36.459152 | orchestrator | 2025-09-20 11:09:36 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:36.459259 | orchestrator | 2025-09-20 11:09:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:39.492503 | orchestrator | 2025-09-20 11:09:39 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:39.492604 | orchestrator | 2025-09-20 11:09:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:42.541567 | orchestrator | 2025-09-20 11:09:42 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:42.541694 | orchestrator | 2025-09-20 11:09:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:45.583135 | orchestrator | 2025-09-20 11:09:45 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:45.583221 | orchestrator | 2025-09-20 11:09:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:48.623365 | orchestrator | 2025-09-20 11:09:48 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:48.624400 | orchestrator | 2025-09-20 11:09:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:51.662237 | orchestrator | 2025-09-20 11:09:51 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:51.662308 | orchestrator | 2025-09-20 11:09:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:54.706527 | orchestrator | 2025-09-20 11:09:54 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:54.706638 | orchestrator | 2025-09-20 11:09:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:09:57.737776 | orchestrator | 2025-09-20 11:09:57 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:09:57.737880 | orchestrator | 2025-09-20 11:09:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:00.789510 | orchestrator | 2025-09-20 11:10:00 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:00.789617 | orchestrator | 2025-09-20 11:10:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:03.830096 | orchestrator | 2025-09-20 11:10:03 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:03.830200 | orchestrator | 2025-09-20 11:10:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:06.872068 | orchestrator | 2025-09-20 11:10:06 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:06.872178 | orchestrator | 2025-09-20 11:10:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:09.918755 | orchestrator | 2025-09-20 11:10:09 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:09.918832 | orchestrator | 2025-09-20 11:10:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:12.962541 | orchestrator | 2025-09-20 11:10:12 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:12.962641 | orchestrator | 2025-09-20 11:10:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:16.011686 | orchestrator | 2025-09-20 11:10:16 | INFO  | Task 
e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:16.011797 | orchestrator | 2025-09-20 11:10:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:19.056935 | orchestrator | 2025-09-20 11:10:19 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:19.057089 | orchestrator | 2025-09-20 11:10:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:22.100460 | orchestrator | 2025-09-20 11:10:22 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:22.100591 | orchestrator | 2025-09-20 11:10:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:25.139796 | orchestrator | 2025-09-20 11:10:25 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:25.139900 | orchestrator | 2025-09-20 11:10:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:28.181433 | orchestrator | 2025-09-20 11:10:28 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:28.181561 | orchestrator | 2025-09-20 11:10:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:31.229760 | orchestrator | 2025-09-20 11:10:31 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:31.229848 | orchestrator | 2025-09-20 11:10:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:34.279340 | orchestrator | 2025-09-20 11:10:34 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:34.279457 | orchestrator | 2025-09-20 11:10:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:37.320001 | orchestrator | 2025-09-20 11:10:37 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:37.320091 | orchestrator | 2025-09-20 11:10:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:40.364480 | orchestrator | 2025-09-20 11:10:40 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:40.364595 | orchestrator | 2025-09-20 11:10:40 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:43.405901 | orchestrator | 2025-09-20 11:10:43 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:43.406159 | orchestrator | 2025-09-20 11:10:43 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:46.441188 | orchestrator | 2025-09-20 11:10:46 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:46.441266 | orchestrator | 2025-09-20 11:10:46 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:49.480024 | orchestrator | 2025-09-20 11:10:49 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:49.480104 | orchestrator | 2025-09-20 11:10:49 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:52.519300 | orchestrator | 2025-09-20 11:10:52 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:52.519402 | orchestrator | 2025-09-20 11:10:52 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:55.563329 | orchestrator | 2025-09-20 11:10:55 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:10:55.563427 | orchestrator | 2025-09-20 11:10:55 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:10:58.609350 | orchestrator | 2025-09-20 11:10:58 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 
11:10:58.609457 | orchestrator | 2025-09-20 11:10:58 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:11:01.649742 | orchestrator | 2025-09-20 11:11:01 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:11:01.649840 | orchestrator | 2025-09-20 11:11:01 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:11:04.691444 | orchestrator | 2025-09-20 11:11:04 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:11:04.691527 | orchestrator | 2025-09-20 11:11:04 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:11:07.736696 | orchestrator | 2025-09-20 11:11:07 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:11:07.736836 | orchestrator | 2025-09-20 11:11:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:11:10.771876 | orchestrator | 2025-09-20 11:11:10 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:11:10.772020 | orchestrator | 2025-09-20 11:11:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:11:13.813919 | orchestrator | 2025-09-20 11:11:13 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state STARTED 2025-09-20 11:11:13.814080 | orchestrator | 2025-09-20 11:11:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 11:11:16.868450 | orchestrator | 2025-09-20 11:11:16 | INFO  | Task e66b44c4-2695-4fa3-8859-2b2485afcd2b is in state SUCCESS 2025-09-20 11:11:16.869140 | orchestrator | 2025-09-20 11:11:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 11:11:16.871627 | orchestrator | 2025-09-20 11:11:16.871657 | orchestrator | 2025-09-20 11:11:16.871669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:11:16.871680 | orchestrator | 2025-09-20 11:11:16.871691 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:11:16.871703 | orchestrator | Saturday 20 September 2025 11:06:49 +0000 (0:00:00.202) 0:00:00.202 **** 2025-09-20 11:11:16.871713 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.871725 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:11:16.871735 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:11:16.871746 | orchestrator | 2025-09-20 11:11:16.871757 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:11:16.871768 | orchestrator | Saturday 20 September 2025 11:06:50 +0000 (0:00:00.222) 0:00:00.425 **** 2025-09-20 11:11:16.871792 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-20 11:11:16.871803 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-20 11:11:16.871814 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-20 11:11:16.871825 | orchestrator | 2025-09-20 11:11:16.871836 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-20 11:11:16.871847 | orchestrator | 2025-09-20 11:11:16.871858 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 11:11:16.871872 | orchestrator | Saturday 20 September 2025 11:06:50 +0000 (0:00:00.328) 0:00:00.753 **** 2025-09-20 11:11:16.871890 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:11:16.871910 | orchestrator | 2025-09-20 11:11:16.871929 | orchestrator | 
TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-20 11:11:16.871979 | orchestrator | Saturday 20 September 2025 11:06:50 +0000 (0:00:00.543) 0:00:01.297 **** 2025-09-20 11:11:16.871990 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-20 11:11:16.872001 | orchestrator | 2025-09-20 11:11:16.872012 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-20 11:11:16.872022 | orchestrator | Saturday 20 September 2025 11:06:54 +0000 (0:00:03.294) 0:00:04.591 **** 2025-09-20 11:11:16.872033 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-20 11:11:16.872045 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-20 11:11:16.872057 | orchestrator | 2025-09-20 11:11:16.872068 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-20 11:11:16.872079 | orchestrator | Saturday 20 September 2025 11:07:00 +0000 (0:00:06.047) 0:00:10.639 **** 2025-09-20 11:11:16.872089 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 11:11:16.872100 | orchestrator | 2025-09-20 11:11:16.872111 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-20 11:11:16.872122 | orchestrator | Saturday 20 September 2025 11:07:03 +0000 (0:00:03.191) 0:00:13.831 **** 2025-09-20 11:11:16.872132 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 11:11:16.872143 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-20 11:11:16.872154 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-20 11:11:16.872165 | orchestrator | 2025-09-20 11:11:16.872176 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-20 11:11:16.872186 | orchestrator | Saturday 20 September 2025 11:07:11 +0000 (0:00:07.747) 0:00:21.578 **** 2025-09-20 11:11:16.872197 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 11:11:16.872207 | orchestrator | 2025-09-20 11:11:16.872218 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-20 11:11:16.872229 | orchestrator | Saturday 20 September 2025 11:07:14 +0000 (0:00:03.342) 0:00:24.921 **** 2025-09-20 11:11:16.872242 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-20 11:11:16.872266 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-20 11:11:16.872278 | orchestrator | 2025-09-20 11:11:16.872291 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-20 11:11:16.872303 | orchestrator | Saturday 20 September 2025 11:07:21 +0000 (0:00:07.127) 0:00:32.049 **** 2025-09-20 11:11:16.872314 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-20 11:11:16.872326 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-20 11:11:16.872338 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-20 11:11:16.872350 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-20 11:11:16.872361 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-20 11:11:16.872374 | orchestrator | 2025-09-20 
11:11:16.872385 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 11:11:16.872397 | orchestrator | Saturday 20 September 2025 11:07:36 +0000 (0:00:14.812) 0:00:46.861 **** 2025-09-20 11:11:16.872409 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:11:16.872421 | orchestrator | 2025-09-20 11:11:16.872434 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-20 11:11:16.872446 | orchestrator | Saturday 20 September 2025 11:07:36 +0000 (0:00:00.497) 0:00:47.358 **** 2025-09-20 11:11:16.872457 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.872469 | orchestrator | 2025-09-20 11:11:16.872481 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-20 11:11:16.872493 | orchestrator | Saturday 20 September 2025 11:07:41 +0000 (0:00:04.534) 0:00:51.893 **** 2025-09-20 11:11:16.872506 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.872517 | orchestrator | 2025-09-20 11:11:16.872530 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-20 11:11:16.872551 | orchestrator | Saturday 20 September 2025 11:07:45 +0000 (0:00:04.448) 0:00:56.341 **** 2025-09-20 11:11:16.872562 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.872573 | orchestrator | 2025-09-20 11:11:16.872584 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-20 11:11:16.872596 | orchestrator | Saturday 20 September 2025 11:07:48 +0000 (0:00:03.012) 0:00:59.354 **** 2025-09-20 11:11:16.872606 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-20 11:11:16.872618 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-20 11:11:16.872629 | orchestrator | 2025-09-20 11:11:16.872640 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-20 11:11:16.872651 | orchestrator | Saturday 20 September 2025 11:07:58 +0000 (0:00:09.884) 0:01:09.238 **** 2025-09-20 11:11:16.872668 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-20 11:11:16.872679 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-20 11:11:16.872693 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-20 11:11:16.872704 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-20 11:11:16.872715 | orchestrator | 2025-09-20 11:11:16.872726 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-20 11:11:16.872737 | orchestrator | Saturday 20 September 2025 11:08:14 +0000 (0:00:15.863) 0:01:25.102 **** 2025-09-20 11:11:16.872748 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.872759 | orchestrator | 2025-09-20 11:11:16.872770 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-20 11:11:16.872787 | orchestrator | Saturday 20 September 2025 11:08:19 +0000 (0:00:04.467) 
0:01:29.569 **** 2025-09-20 11:11:16.872798 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.872808 | orchestrator | 2025-09-20 11:11:16.872819 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-20 11:11:16.872830 | orchestrator | Saturday 20 September 2025 11:08:24 +0000 (0:00:05.229) 0:01:34.799 **** 2025-09-20 11:11:16.872841 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.872852 | orchestrator | 2025-09-20 11:11:16.872863 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-20 11:11:16.872874 | orchestrator | Saturday 20 September 2025 11:08:24 +0000 (0:00:00.479) 0:01:35.278 **** 2025-09-20 11:11:16.872885 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.872896 | orchestrator | 2025-09-20 11:11:16.872907 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 11:11:16.872918 | orchestrator | Saturday 20 September 2025 11:08:29 +0000 (0:00:04.983) 0:01:40.262 **** 2025-09-20 11:11:16.872929 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:11:16.872968 | orchestrator | 2025-09-20 11:11:16.872980 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-20 11:11:16.872991 | orchestrator | Saturday 20 September 2025 11:08:30 +0000 (0:00:01.022) 0:01:41.285 **** 2025-09-20 11:11:16.873002 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873013 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873024 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.873035 | orchestrator | 2025-09-20 11:11:16.873046 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-20 11:11:16.873056 | orchestrator | Saturday 20 September 2025 11:08:36 +0000 (0:00:05.216) 0:01:46.502 **** 2025-09-20 11:11:16.873067 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.873078 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873089 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873100 | orchestrator | 2025-09-20 11:11:16.873111 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-20 11:11:16.873121 | orchestrator | Saturday 20 September 2025 11:08:40 +0000 (0:00:03.892) 0:01:50.395 **** 2025-09-20 11:11:16.873132 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873143 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.873153 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873164 | orchestrator | 2025-09-20 11:11:16.873175 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-20 11:11:16.873186 | orchestrator | Saturday 20 September 2025 11:08:40 +0000 (0:00:00.812) 0:01:51.208 **** 2025-09-20 11:11:16.873196 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:11:16.873207 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873218 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:11:16.873228 | orchestrator | 2025-09-20 11:11:16.873239 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-20 11:11:16.873250 | orchestrator | Saturday 20 September 2025 11:08:42 +0000 (0:00:01.889) 0:01:53.097 **** 2025-09-20 11:11:16.873261 | orchestrator | changed: 
[testbed-node-1] 2025-09-20 11:11:16.873272 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873282 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873293 | orchestrator | 2025-09-20 11:11:16.873304 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-20 11:11:16.873315 | orchestrator | Saturday 20 September 2025 11:08:43 +0000 (0:00:01.186) 0:01:54.284 **** 2025-09-20 11:11:16.873325 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873336 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.873347 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873357 | orchestrator | 2025-09-20 11:11:16.873368 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-20 11:11:16.873379 | orchestrator | Saturday 20 September 2025 11:08:45 +0000 (0:00:01.129) 0:01:55.413 **** 2025-09-20 11:11:16.873396 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873407 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.873419 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873429 | orchestrator | 2025-09-20 11:11:16.873484 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-20 11:11:16.873497 | orchestrator | Saturday 20 September 2025 11:08:47 +0000 (0:00:02.355) 0:01:57.769 **** 2025-09-20 11:11:16.873508 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.873519 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.873530 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.873540 | orchestrator | 2025-09-20 11:11:16.873552 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-20 11:11:16.873562 | orchestrator | Saturday 20 September 2025 11:08:48 +0000 (0:00:01.465) 0:01:59.234 **** 2025-09-20 11:11:16.873573 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873584 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:11:16.873595 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:11:16.873606 | orchestrator | 2025-09-20 11:11:16.873622 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-20 11:11:16.873633 | orchestrator | Saturday 20 September 2025 11:08:49 +0000 (0:00:00.769) 0:02:00.003 **** 2025-09-20 11:11:16.873644 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873655 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:11:16.873666 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:11:16.873677 | orchestrator | 2025-09-20 11:11:16.873688 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 11:11:16.873699 | orchestrator | Saturday 20 September 2025 11:08:52 +0000 (0:00:02.556) 0:02:02.560 **** 2025-09-20 11:11:16.873710 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:11:16.873721 | orchestrator | 2025-09-20 11:11:16.873732 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-20 11:11:16.873743 | orchestrator | Saturday 20 September 2025 11:08:52 +0000 (0:00:00.466) 0:02:03.026 **** 2025-09-20 11:11:16.873754 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873764 | orchestrator | 2025-09-20 11:11:16.873775 | orchestrator | TASK [octavia : Get service project id] 
**************************************** 2025-09-20 11:11:16.873786 | orchestrator | Saturday 20 September 2025 11:08:56 +0000 (0:00:03.543) 0:02:06.570 **** 2025-09-20 11:11:16.873797 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873808 | orchestrator | 2025-09-20 11:11:16.873819 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-20 11:11:16.873830 | orchestrator | Saturday 20 September 2025 11:08:59 +0000 (0:00:02.906) 0:02:09.476 **** 2025-09-20 11:11:16.873840 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-20 11:11:16.873851 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-20 11:11:16.873862 | orchestrator | 2025-09-20 11:11:16.873873 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-20 11:11:16.873884 | orchestrator | Saturday 20 September 2025 11:09:05 +0000 (0:00:06.596) 0:02:16.072 **** 2025-09-20 11:11:16.873894 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873905 | orchestrator | 2025-09-20 11:11:16.873916 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-20 11:11:16.873927 | orchestrator | Saturday 20 September 2025 11:09:08 +0000 (0:00:03.228) 0:02:19.300 **** 2025-09-20 11:11:16.873957 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:11:16.873968 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:11:16.873979 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:11:16.873990 | orchestrator | 2025-09-20 11:11:16.874001 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-20 11:11:16.874012 | orchestrator | Saturday 20 September 2025 11:09:09 +0000 (0:00:00.297) 0:02:19.598 **** 2025-09-20 11:11:16.874074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.874128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.874148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.874160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.874172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.874184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.874203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.874346 | orchestrator | 2025-09-20 11:11:16.874357 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-20 11:11:16.874369 | orchestrator | Saturday 20 September 2025 11:09:11 +0000 (0:00:02.345) 0:02:21.943 **** 2025-09-20 11:11:16.874380 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.874391 | orchestrator | 2025-09-20 11:11:16.874426 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-20 11:11:16.874439 | orchestrator | Saturday 20 September 2025 11:09:11 +0000 (0:00:00.132) 0:02:22.076 **** 2025-09-20 11:11:16.874450 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.874461 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:11:16.874471 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:11:16.874482 | orchestrator | 2025-09-20 11:11:16.874493 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-20 11:11:16.874504 | orchestrator | Saturday 20 September 2025 11:09:12 +0000 (0:00:00.520) 0:02:22.596 **** 2025-09-20 11:11:16.874520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 
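The dictionaries dumped by the loops above are the entries of the octavia service map that kolla-ansible iterates for each octavia task on every host. As a reading aid, here is a condensed YAML sketch of that structure, reconstructed from the logged items; only octavia-api is spelled out, the healthcheck URL is the per-node internal API address, and the empty '' volume entries seen in the log are presumably optional mounts that rendered empty and are omitted. This sketch only illustrates the shape of the data, it is not the role's actual variable definition.

# Condensed sketch, reconstructed from the logged loop items (assumed layout)
octavia_services:
  octavia-api:
    container_name: octavia_api
    group: octavia-api
    enabled: true
    image: registry.osism.tech/kolla/octavia-api:2024.2
    volumes:
      - /etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
      - octavia_driver_agent:/var/run/octavia/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]  # 192.168.16.11/.12 on the other nodes
      timeout: "30"
    haproxy:
      octavia_api:
        enabled: "yes"
        mode: http
        external: false
        port: "9876"
        listen_port: "9876"
        tls_backend: "no"
      octavia_api_external:
        enabled: "yes"
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9876"
        listen_port: "9876"
        tls_backend: "no"
  octavia-driver-agent: {}    # same shape, no healthcheck or haproxy section in the log
  octavia-health-manager: {}  # healthcheck: healthcheck_port octavia-health-manager 3306
  octavia-housekeeping: {}    # healthcheck: healthcheck_port octavia-housekeeping 3306
  octavia-worker: {}          # healthcheck: healthcheck_port octavia-worker 5672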
11:11:16.874533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.874550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.874562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.874573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.874585 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.874633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.874648 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.874659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.874676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.874687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.874698 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:11:16.874710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.874748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.874765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.874777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.874795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.874806 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:11:16.874817 | orchestrator | 2025-09-20 11:11:16.874828 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 11:11:16.874840 | orchestrator | Saturday 20 September 2025 11:09:12 +0000 (0:00:00.723) 0:02:23.320 **** 2025-09-20 11:11:16.874851 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:11:16.874862 | orchestrator | 2025-09-20 11:11:16.874873 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-20 11:11:16.874884 | orchestrator | Saturday 20 September 2025 11:09:13 +0000 (0:00:00.588) 0:02:23.908 **** 2025-09-20 11:11:16.874899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.874989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.875012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.875031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.875043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 
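The "service-cert-copy : octavia | Copying over extra CA certificates" task above fans out over the same service map, once per enabled service and per host, which is why each node reports one "changed" result per dictionary entry. A minimal hand-written Ansible sketch of that pattern follows; the source path, destination layout, and variable names are illustrative assumptions for this sketch, not the actual kolla-ansible task.

# Illustrative per-service copy loop over an octavia_services-style dict (assumed paths)
- name: octavia | Copying over extra CA certificates (sketch)
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"             # assumed directory holding extra CA bundles
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"  # assumed per-service config directory
    mode: "0644"
  loop: "{{ octavia_services | dict2items }}"
  loop_control:
    label: "{{ item.key }}"   # print only the service name instead of the whole dictionary
  when: item.value.enabled | bool

Using loop_control.label as in this sketch is what would keep each loop iteration to a single short log line; the full dictionary dumps seen in this build are the default labeling behaviour when no label is set.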
11:11:16.875054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.875065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875197 | orchestrator | 2025-09-20 11:11:16.875208 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-20 11:11:16.875219 | orchestrator | Saturday 20 September 2025 11:09:18 +0000 (0:00:05.188) 0:02:29.097 **** 2025-09-20 11:11:16.875235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.875252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.875264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.875298 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.875315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.875338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.875349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.875383 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:11:16.875395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.875411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.875433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.875468 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:11:16.875479 | orchestrator | 2025-09-20 11:11:16.875490 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-20 11:11:16.875502 | orchestrator | Saturday 20 September 2025 11:09:19 +0000 (0:00:00.926) 0:02:30.023 **** 2025-09-20 11:11:16.875513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.875523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.875534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.875583 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.875593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.875603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.875614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.875655 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:11:16.875668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 11:11:16.875679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 11:11:16.875689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 11:11:16.875709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 11:11:16.875724 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:11:16.875734 | orchestrator | 2025-09-20 11:11:16.875744 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-20 11:11:16.875754 | orchestrator | Saturday 20 September 2025 11:09:20 +0000 (0:00:00.890) 0:02:30.914 **** 2025-09-20 11:11:16.875774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.875785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.875795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.875805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.875815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.875834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.875852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.875974 | orchestrator | 2025-09-20 11:11:16.875985 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-20 11:11:16.875995 | orchestrator | Saturday 20 September 2025 11:09:25 +0000 (0:00:05.044) 0:02:35.959 **** 2025-09-20 11:11:16.876005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-20 11:11:16.876015 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-20 11:11:16.876025 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-20 11:11:16.876035 | orchestrator | 2025-09-20 11:11:16.876045 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-20 11:11:16.876054 | orchestrator | Saturday 20 September 2025 11:09:27 +0000 (0:00:02.075) 0:02:38.034 **** 2025-09-20 11:11:16.876065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.876080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.876102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.876113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.876123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.876134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.876144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876257 | orchestrator | 2025-09-20 
11:11:16.876267 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-20 11:11:16.876277 | orchestrator | Saturday 20 September 2025 11:09:43 +0000 (0:00:15.689) 0:02:53.724 **** 2025-09-20 11:11:16.876290 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.876307 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.876323 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.876341 | orchestrator | 2025-09-20 11:11:16.876357 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-20 11:11:16.876371 | orchestrator | Saturday 20 September 2025 11:09:44 +0000 (0:00:01.555) 0:02:55.280 **** 2025-09-20 11:11:16.876381 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876391 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876405 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876415 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876425 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876435 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876444 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876454 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876463 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876473 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876487 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876497 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876507 | orchestrator | 2025-09-20 11:11:16.876516 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-20 11:11:16.876526 | orchestrator | Saturday 20 September 2025 11:09:49 +0000 (0:00:04.946) 0:03:00.226 **** 2025-09-20 11:11:16.876536 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876545 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876555 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876564 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876574 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876584 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876593 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876612 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876622 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876632 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876641 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876650 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876660 | orchestrator | 2025-09-20 11:11:16.876670 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-health-manager] ********** 2025-09-20 11:11:16.876680 | orchestrator | Saturday 20 September 2025 11:09:54 +0000 (0:00:04.916) 0:03:05.143 **** 2025-09-20 11:11:16.876689 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876699 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876708 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-20 11:11:16.876718 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876728 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876737 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-20 11:11:16.876747 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876756 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876766 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-20 11:11:16.876775 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876784 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876794 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-20 11:11:16.876803 | orchestrator | 2025-09-20 11:11:16.876813 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-20 11:11:16.876823 | orchestrator | Saturday 20 September 2025 11:09:59 +0000 (0:00:05.108) 0:03:10.251 **** 2025-09-20 11:11:16.876833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.876850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.876870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 11:11:16.876881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.876891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.876901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 11:11:16.876911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.876992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 11:11:16.877002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}}) 2025-09-20 11:11:16.877012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.877029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 11:11:16.877044 | orchestrator | 2025-09-20 11:11:16.877054 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 11:11:16.877064 | orchestrator | Saturday 20 September 2025 11:10:03 +0000 (0:00:03.529) 0:03:13.781 **** 2025-09-20 11:11:16.877074 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:11:16.877084 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:11:16.877094 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:11:16.877103 | orchestrator | 2025-09-20 11:11:16.877113 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-20 11:11:16.877126 | orchestrator | Saturday 20 September 2025 11:10:03 +0000 (0:00:00.280) 0:03:14.062 **** 2025-09-20 11:11:16.877136 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877145 | orchestrator | 2025-09-20 11:11:16.877155 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-20 11:11:16.877165 | orchestrator | Saturday 20 September 2025 11:10:05 +0000 (0:00:01.917) 0:03:15.979 **** 2025-09-20 11:11:16.877174 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877184 | orchestrator | 2025-09-20 11:11:16.877194 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-20 11:11:16.877203 | orchestrator | Saturday 20 September 2025 11:10:07 +0000 (0:00:02.026) 0:03:18.005 **** 2025-09-20 11:11:16.877213 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877222 | orchestrator | 2025-09-20 11:11:16.877232 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-20 11:11:16.877242 | orchestrator | Saturday 20 September 2025 11:10:09 +0000 (0:00:02.017) 0:03:20.023 **** 2025-09-20 11:11:16.877251 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877261 | orchestrator | 2025-09-20 11:11:16.877271 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-20 11:11:16.877280 | orchestrator | Saturday 20 September 2025 11:10:11 +0000 (0:00:02.008) 0:03:22.032 **** 2025-09-20 11:11:16.877290 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877300 | 
orchestrator | 2025-09-20 11:11:16.877309 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-20 11:11:16.877319 | orchestrator | Saturday 20 September 2025 11:10:31 +0000 (0:00:19.922) 0:03:41.954 **** 2025-09-20 11:11:16.877329 | orchestrator | 2025-09-20 11:11:16.877338 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-20 11:11:16.877348 | orchestrator | Saturday 20 September 2025 11:10:31 +0000 (0:00:00.070) 0:03:42.024 **** 2025-09-20 11:11:16.877358 | orchestrator | 2025-09-20 11:11:16.877367 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-20 11:11:16.877377 | orchestrator | Saturday 20 September 2025 11:10:31 +0000 (0:00:00.069) 0:03:42.093 **** 2025-09-20 11:11:16.877387 | orchestrator | 2025-09-20 11:11:16.877396 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-20 11:11:16.877406 | orchestrator | Saturday 20 September 2025 11:10:31 +0000 (0:00:00.065) 0:03:42.159 **** 2025-09-20 11:11:16.877416 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877426 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.877435 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.877445 | orchestrator | 2025-09-20 11:11:16.877455 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-20 11:11:16.877464 | orchestrator | Saturday 20 September 2025 11:10:43 +0000 (0:00:11.291) 0:03:53.450 **** 2025-09-20 11:11:16.877474 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877484 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.877493 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.877503 | orchestrator | 2025-09-20 11:11:16.877512 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-20 11:11:16.877522 | orchestrator | Saturday 20 September 2025 11:10:49 +0000 (0:00:06.057) 0:03:59.508 **** 2025-09-20 11:11:16.877532 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877542 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.877556 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.877566 | orchestrator | 2025-09-20 11:11:16.877576 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-20 11:11:16.877585 | orchestrator | Saturday 20 September 2025 11:10:59 +0000 (0:00:10.072) 0:04:09.581 **** 2025-09-20 11:11:16.877595 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.877605 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.877614 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877624 | orchestrator | 2025-09-20 11:11:16.877633 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-20 11:11:16.877643 | orchestrator | Saturday 20 September 2025 11:11:07 +0000 (0:00:08.251) 0:04:17.832 **** 2025-09-20 11:11:16.877653 | orchestrator | changed: [testbed-node-1] 2025-09-20 11:11:16.877662 | orchestrator | changed: [testbed-node-2] 2025-09-20 11:11:16.877672 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:11:16.877681 | orchestrator | 2025-09-20 11:11:16.877691 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:11:16.877701 | orchestrator | testbed-node-0 : ok=57  
changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-20 11:11:16.877711 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 11:11:16.877721 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 11:11:16.877730 | orchestrator |
2025-09-20 11:11:16.877740 | orchestrator |
2025-09-20 11:11:16.877750 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:11:16.877759 | orchestrator | Saturday 20 September 2025 11:11:15 +0000 (0:00:08.016) 0:04:25.849 ****
2025-09-20 11:11:16.877774 | orchestrator | ===============================================================================
2025-09-20 11:11:16.877791 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.92s
2025-09-20 11:11:16.877809 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.86s
2025-09-20 11:11:16.877826 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.69s
2025-09-20 11:11:16.877842 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.81s
2025-09-20 11:11:16.877857 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.29s
2025-09-20 11:11:16.877872 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.07s
2025-09-20 11:11:16.877895 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.88s
2025-09-20 11:11:16.877912 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.25s
2025-09-20 11:11:16.877947 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.02s
2025-09-20 11:11:16.877959 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.75s
2025-09-20 11:11:16.877969 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.13s
2025-09-20 11:11:16.877978 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.60s
2025-09-20 11:11:16.877988 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.06s
2025-09-20 11:11:16.877998 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.05s
2025-09-20 11:11:16.878007 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.23s
2025-09-20 11:11:16.878042 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.22s
2025-09-20 11:11:16.878054 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.19s
2025-09-20 11:11:16.878064 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.11s
2025-09-20 11:11:16.878073 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.04s
2025-09-20 11:11:16.878091 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 4.98s
2025-09-20 11:11:19.908862 | orchestrator | 2025-09-20 11:11:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:22.947548 | orchestrator | 2025-09-20 11:11:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:25.992579 | orchestrator | 2025-09-20 11:11:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:29.032685 | orchestrator | 2025-09-20 11:11:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:32.072477 | orchestrator | 2025-09-20 11:11:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:35.120430 | orchestrator | 2025-09-20 11:11:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:38.161017 | orchestrator | 2025-09-20 11:11:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:41.201915 | orchestrator | 2025-09-20 11:11:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:44.236566 | orchestrator | 2025-09-20 11:11:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:47.282494 | orchestrator | 2025-09-20 11:11:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:50.324224 | orchestrator | 2025-09-20 11:11:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:53.361113 | orchestrator | 2025-09-20 11:11:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:56.416180 | orchestrator | 2025-09-20 11:11:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:11:59.458981 | orchestrator | 2025-09-20 11:11:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:12:02.496194 | orchestrator | 2025-09-20 11:12:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:12:05.538583 | orchestrator | 2025-09-20 11:12:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:12:08.580661 | orchestrator | 2025-09-20 11:12:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:12:11.618062 | orchestrator | 2025-09-20 11:12:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:12:14.664550 | orchestrator | 2025-09-20 11:12:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-20 11:12:17.714462 | orchestrator |
2025-09-20 11:12:18.057681 | orchestrator |
2025-09-20 11:12:18.063560 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Sep 20 11:12:18 UTC 2025
2025-09-20 11:12:18.063638 | orchestrator |
2025-09-20 11:12:18.348100 | orchestrator | ok: Runtime: 0:32:10.457691
2025-09-20 11:12:18.605461 |
2025-09-20 11:12:18.605614 | TASK [Bootstrap services]
2025-09-20 11:12:19.384092 | orchestrator |
2025-09-20 11:12:19.384403 | orchestrator | # BOOTSTRAP
2025-09-20 11:12:19.384430 | orchestrator |
2025-09-20 11:12:19.384445 | orchestrator | + set -e
2025-09-20 11:12:19.384459 | orchestrator | + echo
2025-09-20 11:12:19.384472 | orchestrator | + echo '# BOOTSTRAP'
2025-09-20 11:12:19.384490 | orchestrator | + echo
2025-09-20 11:12:19.384536 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-09-20 11:12:19.394690 | orchestrator | + set -e
2025-09-20 11:12:19.394748 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-09-20 11:12:24.133774 | orchestrator | 2025-09-20 11:12:24 | INFO  | It takes a moment until task 6059999d-317c-435a-874f-844ce282c88e (flavor-manager) has been started and output is visible here.
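For reference, the (item=...) dictionaries that the octavia tasks above loop over are the kolla-ansible service definitions for this deployment. Written out as YAML, the octavia-api entry for testbed-node-0 looks like the following; all values are taken from the log output above, the two empty-string volume slots are omitted, and this is only a readable restatement, not configuration to copy:

    octavia_api:
      container_name: octavia_api
      group: octavia-api
      enabled: true
      image: registry.osism.tech/kolla/octavia-api:2024.2
      volumes:
        - /etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - octavia_driver_agent:/var/run/octavia/
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]
        timeout: "30"
      haproxy:
        octavia_api:
          enabled: "yes"
          mode: http
          external: false
          port: "9876"
          listen_port: "9876"
          tls_backend: "no"
        octavia_api_external:
          enabled: "yes"
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "9876"
          listen_port: "9876"
          tls_backend: "no"

The healthcheck block maps onto the container runtime's health check (interval, retries, start period, timeout in seconds), and the haproxy block is what generated the internal and external frontends on port 9876 earlier in the run.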
2025-09-20 11:12:31.532326 | orchestrator | 2025-09-20 11:12:27 | INFO  | Flavor SCS-1L-1 created
2025-09-20 11:12:31.532467 | orchestrator | 2025-09-20 11:12:27 | INFO  | Flavor SCS-1L-1-5 created
2025-09-20 11:12:31.532485 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-1V-2 created
2025-09-20 11:12:31.532498 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-1V-2-5 created
2025-09-20 11:12:31.532509 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-1V-4 created
2025-09-20 11:12:31.532520 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-1V-4-10 created
2025-09-20 11:12:31.532531 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-1V-8 created
2025-09-20 11:12:31.532543 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-1V-8-20 created
2025-09-20 11:12:31.532569 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-2V-4 created
2025-09-20 11:12:31.532580 | orchestrator | 2025-09-20 11:12:28 | INFO  | Flavor SCS-2V-4-10 created
2025-09-20 11:12:31.532591 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-2V-8 created
2025-09-20 11:12:31.532602 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-2V-8-20 created
2025-09-20 11:12:31.532613 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-2V-16 created
2025-09-20 11:12:31.532624 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-2V-16-50 created
2025-09-20 11:12:31.532635 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-4V-8 created
2025-09-20 11:12:31.532645 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-4V-8-20 created
2025-09-20 11:12:31.532656 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-4V-16 created
2025-09-20 11:12:31.532667 | orchestrator | 2025-09-20 11:12:29 | INFO  | Flavor SCS-4V-16-50 created
2025-09-20 11:12:31.532678 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-4V-32 created
2025-09-20 11:12:31.532689 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-4V-32-100 created
2025-09-20 11:12:31.532700 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-8V-16 created
2025-09-20 11:12:31.532710 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-8V-16-50 created
2025-09-20 11:12:31.532722 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-8V-32 created
2025-09-20 11:12:31.532733 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-8V-32-100 created
2025-09-20 11:12:31.532743 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-16V-32 created
2025-09-20 11:12:31.532754 | orchestrator | 2025-09-20 11:12:30 | INFO  | Flavor SCS-16V-32-100 created
2025-09-20 11:12:31.532765 | orchestrator | 2025-09-20 11:12:31 | INFO  | Flavor SCS-2V-4-20s created
2025-09-20 11:12:31.532776 | orchestrator | 2025-09-20 11:12:31 | INFO  | Flavor SCS-4V-8-50s created
2025-09-20 11:12:31.532787 | orchestrator | 2025-09-20 11:12:31 | INFO  | Flavor SCS-8V-32-100s created
2025-09-20 11:12:33.479649 | orchestrator | 2025-09-20 11:12:33 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-09-20 11:12:43.751872 | orchestrator | 2025-09-20 11:12:43 | INFO  | Task 0c25ccbf-82bb-45d6-9bbc-a99c42147f43 (bootstrap-basic) was prepared for execution.
2025-09-20 11:12:43.752011 | orchestrator | 2025-09-20 11:12:43 | INFO  | It takes a moment until task 0c25ccbf-82bb-45d6-9bbc-a99c42147f43 (bootstrap-basic) has been started and output is visible here.
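The flavor names created above follow the SCS naming scheme: SCS-2V-4-10, for example, names 2 vCPUs, 4 GiB of RAM and a 10 GB root disk, while names without a disk suffix create flavors that boot from volume. As a rough illustration only (not how the flavor-manager task itself is implemented), an equivalent flavor could be created by hand with the openstack.cloud collection, assuming a clouds.yaml entry named admin:

    - name: Create an SCS-style flavor by hand (illustrative sketch)
      openstack.cloud.compute_flavor:
        cloud: admin        # assumed clouds.yaml entry
        state: present
        name: SCS-2V-4-10
        vcpus: 2
        ram: 4096           # MiB
        disk: 10            # GB root disk
        is_public: true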
2025-09-20 11:13:41.342832 | orchestrator |
2025-09-20 11:13:41.343011 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-09-20 11:13:41.343030 | orchestrator |
2025-09-20 11:13:41.343042 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-20 11:13:41.343053 | orchestrator | Saturday 20 September 2025 11:12:47 +0000 (0:00:00.077) 0:00:00.077 ****
2025-09-20 11:13:41.343065 | orchestrator | ok: [localhost]
2025-09-20 11:13:41.343076 | orchestrator |
2025-09-20 11:13:41.343088 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-09-20 11:13:41.343098 | orchestrator | Saturday 20 September 2025 11:12:49 +0000 (0:00:01.821) 0:00:01.898 ****
2025-09-20 11:13:41.343109 | orchestrator | ok: [localhost]
2025-09-20 11:13:41.343120 | orchestrator |
2025-09-20 11:13:41.343131 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-09-20 11:13:41.343142 | orchestrator | Saturday 20 September 2025 11:12:57 +0000 (0:00:08.013) 0:00:09.911 ****
2025-09-20 11:13:41.343153 | orchestrator | changed: [localhost]
2025-09-20 11:13:41.343164 | orchestrator |
2025-09-20 11:13:41.343176 | orchestrator | TASK [Get volume type local] ***************************************************
2025-09-20 11:13:41.343187 | orchestrator | Saturday 20 September 2025 11:13:05 +0000 (0:00:07.747) 0:00:17.659 ****
2025-09-20 11:13:41.343198 | orchestrator | ok: [localhost]
2025-09-20 11:13:41.343209 | orchestrator |
2025-09-20 11:13:41.343219 | orchestrator | TASK [Create volume type local] ************************************************
2025-09-20 11:13:41.343230 | orchestrator | Saturday 20 September 2025 11:13:12 +0000 (0:00:06.998) 0:00:24.658 ****
2025-09-20 11:13:41.343247 | orchestrator | changed: [localhost]
2025-09-20 11:13:41.343258 | orchestrator |
2025-09-20 11:13:41.343268 | orchestrator | TASK [Create public network] ***************************************************
2025-09-20 11:13:41.343279 | orchestrator | Saturday 20 September 2025 11:13:19 +0000 (0:00:06.443) 0:00:31.102 ****
2025-09-20 11:13:41.343290 | orchestrator | changed: [localhost]
2025-09-20 11:13:41.343300 | orchestrator |
2025-09-20 11:13:41.343311 | orchestrator | TASK [Set public network to default] *******************************************
2025-09-20 11:13:41.343322 | orchestrator | Saturday 20 September 2025 11:13:23 +0000 (0:00:04.476) 0:00:35.578 ****
2025-09-20 11:13:41.343332 | orchestrator | changed: [localhost]
2025-09-20 11:13:41.343343 | orchestrator |
2025-09-20 11:13:41.343354 | orchestrator | TASK [Create public subnet] ****************************************************
2025-09-20 11:13:41.343377 | orchestrator | Saturday 20 September 2025 11:13:29 +0000 (0:00:06.050) 0:00:41.629 ****
2025-09-20 11:13:41.343390 | orchestrator | changed: [localhost]
2025-09-20 11:13:41.343403 | orchestrator |
2025-09-20 11:13:41.343415 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-09-20 11:13:41.343428 | orchestrator | Saturday 20 September 2025 11:13:33 +0000 (0:00:04.164) 0:00:45.794 ****
2025-09-20 11:13:41.343440 | orchestrator | changed: [localhost]
2025-09-20 11:13:41.343452 | orchestrator |
2025-09-20 11:13:41.343464 | orchestrator | TASK [Create manager role] *****************************************************
2025-09-20 11:13:41.343476 | orchestrator | Saturday 20 September 2025 11:13:37 +0000 (0:00:03.839) 0:00:49.633 ****
2025-09-20 11:13:41.343488 | orchestrator | ok: [localhost]
2025-09-20 11:13:41.343501 | orchestrator |
2025-09-20 11:13:41.343513 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 11:13:41.343526 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 11:13:41.343554 | orchestrator |
2025-09-20 11:13:41.343566 | orchestrator |
2025-09-20 11:13:41.343588 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 11:13:41.343625 | orchestrator | Saturday 20 September 2025 11:13:41 +0000 (0:00:03.557) 0:00:53.191 ****
2025-09-20 11:13:41.343638 | orchestrator | ===============================================================================
2025-09-20 11:13:41.343650 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.01s
2025-09-20 11:13:41.343662 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.75s
2025-09-20 11:13:41.343674 | orchestrator | Get volume type local --------------------------------------------------- 7.00s
2025-09-20 11:13:41.343684 | orchestrator | Create volume type local ------------------------------------------------ 6.44s
2025-09-20 11:13:41.343695 | orchestrator | Set public network to default ------------------------------------------- 6.05s
2025-09-20 11:13:41.343706 | orchestrator | Create public network --------------------------------------------------- 4.48s
2025-09-20 11:13:41.343717 | orchestrator | Create public subnet ---------------------------------------------------- 4.16s
2025-09-20 11:13:41.343727 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.84s
2025-09-20 11:13:41.343738 | orchestrator | Create manager role ----------------------------------------------------- 3.56s
2025-09-20 11:13:41.343749 | orchestrator | Gathering Facts --------------------------------------------------------- 1.82s
2025-09-20 11:13:43.722269 | orchestrator | 2025-09-20 11:13:43 | INFO  | It takes a moment until task 46861c31-e9b0-4fda-a452-ed44563fec96 (image-manager) has been started and output is visible here.
2025-09-20 11:14:22.259880 | orchestrator | 2025-09-20 11:13:46 | INFO  | Processing image 'Cirros 0.6.2'
2025-09-20 11:14:22.260006 | orchestrator | 2025-09-20 11:13:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-09-20 11:14:22.260019 | orchestrator | 2025-09-20 11:13:46 | INFO  | Importing image Cirros 0.6.2
2025-09-20 11:14:22.260025 | orchestrator | 2025-09-20 11:13:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-20 11:14:22.260032 | orchestrator | 2025-09-20 11:13:48 | INFO  | Waiting for image to leave queued state...
2025-09-20 11:14:22.260038 | orchestrator | 2025-09-20 11:13:50 | INFO  | Waiting for import to complete...
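The bootstrap-basic play above sets up the basic resources a fresh cloud needs: two volume types (LUKS and local), a public provider network with its subnet, a default IPv4 subnet pool, and a manager role. A rough equivalent of the two network tasks with the openstack.cloud collection is sketched below; the network name, provider settings and CIDR are assumptions for illustration, not values taken from this job:

    - name: Create public network (sketch; provider settings are assumed)
      openstack.cloud.network:
        cloud: admin                          # assumed clouds.yaml entry
        state: present
        name: public
        external: true
        provider_network_type: flat           # assumption
        provider_physical_network: physnet1   # assumption

    - name: Create public subnet (sketch; CIDR is assumed)
      openstack.cloud.subnet:
        cloud: admin
        state: present
        network_name: public
        name: public-subnet
        cidr: 192.168.112.0/24                # assumption
        enable_dhcp: false

Marking the network as external and default is what lets tenant routers and floating IPs attach to it without naming it explicitly.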
2025-09-20 11:14:22.260044 | orchestrator | 2025-09-20 11:14:00 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-20 11:14:22.260050 | orchestrator | 2025-09-20 11:14:00 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-20 11:14:22.260055 | orchestrator | 2025-09-20 11:14:00 | INFO  | Setting internal_version = 0.6.2 2025-09-20 11:14:22.260061 | orchestrator | 2025-09-20 11:14:00 | INFO  | Setting image_original_user = cirros 2025-09-20 11:14:22.260067 | orchestrator | 2025-09-20 11:14:00 | INFO  | Adding tag os:cirros 2025-09-20 11:14:22.260073 | orchestrator | 2025-09-20 11:14:01 | INFO  | Setting property architecture: x86_64 2025-09-20 11:14:22.260078 | orchestrator | 2025-09-20 11:14:01 | INFO  | Setting property hw_disk_bus: scsi 2025-09-20 11:14:22.260083 | orchestrator | 2025-09-20 11:14:01 | INFO  | Setting property hw_rng_model: virtio 2025-09-20 11:14:22.260089 | orchestrator | 2025-09-20 11:14:01 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-20 11:14:22.260094 | orchestrator | 2025-09-20 11:14:01 | INFO  | Setting property hw_watchdog_action: reset 2025-09-20 11:14:22.260100 | orchestrator | 2025-09-20 11:14:02 | INFO  | Setting property hypervisor_type: qemu 2025-09-20 11:14:22.260105 | orchestrator | 2025-09-20 11:14:02 | INFO  | Setting property os_distro: cirros 2025-09-20 11:14:22.260111 | orchestrator | 2025-09-20 11:14:02 | INFO  | Setting property os_purpose: minimal 2025-09-20 11:14:22.260116 | orchestrator | 2025-09-20 11:14:02 | INFO  | Setting property replace_frequency: never 2025-09-20 11:14:22.260138 | orchestrator | 2025-09-20 11:14:02 | INFO  | Setting property uuid_validity: none 2025-09-20 11:14:22.260144 | orchestrator | 2025-09-20 11:14:03 | INFO  | Setting property provided_until: none 2025-09-20 11:14:22.260155 | orchestrator | 2025-09-20 11:14:03 | INFO  | Setting property image_description: Cirros 2025-09-20 11:14:22.260164 | orchestrator | 2025-09-20 11:14:03 | INFO  | Setting property image_name: Cirros 2025-09-20 11:14:22.260170 | orchestrator | 2025-09-20 11:14:03 | INFO  | Setting property internal_version: 0.6.2 2025-09-20 11:14:22.260176 | orchestrator | 2025-09-20 11:14:03 | INFO  | Setting property image_original_user: cirros 2025-09-20 11:14:22.260181 | orchestrator | 2025-09-20 11:14:03 | INFO  | Setting property os_version: 0.6.2 2025-09-20 11:14:22.260187 | orchestrator | 2025-09-20 11:14:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-20 11:14:22.260194 | orchestrator | 2025-09-20 11:14:04 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-20 11:14:22.260199 | orchestrator | 2025-09-20 11:14:04 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-20 11:14:22.260204 | orchestrator | 2025-09-20 11:14:04 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-20 11:14:22.260210 | orchestrator | 2025-09-20 11:14:04 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-20 11:14:22.260215 | orchestrator | 2025-09-20 11:14:04 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-20 11:14:22.260221 | orchestrator | 2025-09-20 11:14:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-20 11:14:22.260226 | orchestrator | 2025-09-20 11:14:05 | INFO  | Importing image Cirros 0.6.3 2025-09-20 11:14:22.260232 | orchestrator | 2025-09-20 11:14:05 | INFO  | Importing from URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-20 11:14:22.260237 | orchestrator | 2025-09-20 11:14:05 | INFO  | Waiting for image to leave queued state... 2025-09-20 11:14:22.260242 | orchestrator | 2025-09-20 11:14:07 | INFO  | Waiting for import to complete... 2025-09-20 11:14:22.260258 | orchestrator | 2025-09-20 11:14:17 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-09-20 11:14:22.260264 | orchestrator | 2025-09-20 11:14:18 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-20 11:14:22.260269 | orchestrator | 2025-09-20 11:14:18 | INFO  | Setting internal_version = 0.6.3 2025-09-20 11:14:22.260275 | orchestrator | 2025-09-20 11:14:18 | INFO  | Setting image_original_user = cirros 2025-09-20 11:14:22.260280 | orchestrator | 2025-09-20 11:14:18 | INFO  | Adding tag os:cirros 2025-09-20 11:14:22.260285 | orchestrator | 2025-09-20 11:14:18 | INFO  | Setting property architecture: x86_64 2025-09-20 11:14:22.260290 | orchestrator | 2025-09-20 11:14:18 | INFO  | Setting property hw_disk_bus: scsi 2025-09-20 11:14:22.260296 | orchestrator | 2025-09-20 11:14:18 | INFO  | Setting property hw_rng_model: virtio 2025-09-20 11:14:22.260301 | orchestrator | 2025-09-20 11:14:18 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-20 11:14:22.260307 | orchestrator | 2025-09-20 11:14:19 | INFO  | Setting property hw_watchdog_action: reset 2025-09-20 11:14:22.260312 | orchestrator | 2025-09-20 11:14:19 | INFO  | Setting property hypervisor_type: qemu 2025-09-20 11:14:22.260317 | orchestrator | 2025-09-20 11:14:19 | INFO  | Setting property os_distro: cirros 2025-09-20 11:14:22.260327 | orchestrator | 2025-09-20 11:14:19 | INFO  | Setting property os_purpose: minimal 2025-09-20 11:14:22.260332 | orchestrator | 2025-09-20 11:14:19 | INFO  | Setting property replace_frequency: never 2025-09-20 11:14:22.260338 | orchestrator | 2025-09-20 11:14:19 | INFO  | Setting property uuid_validity: none 2025-09-20 11:14:22.260343 | orchestrator | 2025-09-20 11:14:20 | INFO  | Setting property provided_until: none 2025-09-20 11:14:22.260348 | orchestrator | 2025-09-20 11:14:20 | INFO  | Setting property image_description: Cirros 2025-09-20 11:14:22.260353 | orchestrator | 2025-09-20 11:14:20 | INFO  | Setting property image_name: Cirros 2025-09-20 11:14:22.260359 | orchestrator | 2025-09-20 11:14:20 | INFO  | Setting property internal_version: 0.6.3 2025-09-20 11:14:22.260364 | orchestrator | 2025-09-20 11:14:20 | INFO  | Setting property image_original_user: cirros 2025-09-20 11:14:22.260369 | orchestrator | 2025-09-20 11:14:21 | INFO  | Setting property os_version: 0.6.3 2025-09-20 11:14:22.260375 | orchestrator | 2025-09-20 11:14:21 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-20 11:14:22.260380 | orchestrator | 2025-09-20 11:14:21 | INFO  | Setting property image_build_date: 2024-09-26 2025-09-20 11:14:22.260389 | orchestrator | 2025-09-20 11:14:21 | INFO  | Checking status of 'Cirros 0.6.3' 2025-09-20 11:14:22.260395 | orchestrator | 2025-09-20 11:14:21 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-09-20 11:14:22.260400 | orchestrator | 2025-09-20 11:14:21 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-09-20 11:14:22.468126 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-09-20 11:14:24.328679 | orchestrator | 2025-09-20 11:14:24 | INFO  | 
date: 2025-09-20 2025-09-20 11:14:24.328753 | orchestrator | 2025-09-20 11:14:24 | INFO  | image: octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 11:14:24.328763 | orchestrator | 2025-09-20 11:14:24 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 11:14:24.328785 | orchestrator | 2025-09-20 11:14:24 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2.CHECKSUM 2025-09-20 11:14:24.378823 | orchestrator | 2025-09-20 11:14:24 | INFO  | checksum: 7aa651a260e9466be0acf504c7661d95d1bb238dfd89417d18ec15594b635f43 2025-09-20 11:14:24.435726 | orchestrator | 2025-09-20 11:14:24 | INFO  | It takes a moment until task 08e7c04b-fd0c-48f4-954d-13828d9fc2b5 (image-manager) has been started and output is visible here. 2025-09-20 11:15:25.175706 | orchestrator | 2025-09-20 11:14:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-20' 2025-09-20 11:15:25.175822 | orchestrator | 2025-09-20 11:14:26 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2: 200 2025-09-20 11:15:25.175844 | orchestrator | 2025-09-20 11:14:26 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-20 2025-09-20 11:15:25.175857 | orchestrator | 2025-09-20 11:14:26 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 11:15:25.175870 | orchestrator | 2025-09-20 11:14:27 | INFO  | Waiting for image to leave queued state... 2025-09-20 11:15:25.175881 | orchestrator | 2025-09-20 11:14:29 | INFO  | Waiting for import to complete... 2025-09-20 11:15:25.175916 | orchestrator | 2025-09-20 11:14:39 | INFO  | Waiting for import to complete... 2025-09-20 11:15:25.175928 | orchestrator | 2025-09-20 11:14:50 | INFO  | Waiting for import to complete... 2025-09-20 11:15:25.175939 | orchestrator | 2025-09-20 11:15:00 | INFO  | Waiting for import to complete... 2025-09-20 11:15:25.175950 | orchestrator | 2025-09-20 11:15:10 | INFO  | Waiting for import to complete... 
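Note: the amphora bootstrap script above resolves a checksum from the published .CHECKSUM file before handing the image to image-manager. The following is a small hedged sketch of how that verification could be reproduced by hand; the image name and URLs are taken from the log, while the .CHECKSUM file layout (sha256 hash followed by the filename) is an assumption.

#!/usr/bin/env bash
# Hedged sketch: verify the published checksum of the amphora image locally.
set -euo pipefail

IMAGE="octavia-amphora-haproxy-2024.2.20250920.qcow2"
BASE="https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image"

# Download the image and the .CHECKSUM file published next to it
curl -fsSL -o "$IMAGE" "$BASE/$IMAGE"
expected="$(curl -fsSL "$BASE/$IMAGE.CHECKSUM" | awk '{print $1; exit}')"

# Compare against the locally computed sha256 (the log reports 7aa651a2...)
actual="$(sha256sum "$IMAGE" | awk '{print $1}')"
if [ "$expected" = "$actual" ]; then
  echo "checksum OK: $actual"
else
  echo "checksum mismatch: expected $expected, got $actual" >&2
  exit 1
fi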
2025-09-20 11:15:25.175961 | orchestrator | 2025-09-20 11:15:20 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-20' successfully completed, reloading images 2025-09-20 11:15:25.175972 | orchestrator | 2025-09-20 11:15:20 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-20' 2025-09-20 11:15:25.175983 | orchestrator | 2025-09-20 11:15:20 | INFO  | Setting internal_version = 2025-09-20 2025-09-20 11:15:25.175994 | orchestrator | 2025-09-20 11:15:20 | INFO  | Setting image_original_user = ubuntu 2025-09-20 11:15:25.176005 | orchestrator | 2025-09-20 11:15:20 | INFO  | Adding tag amphora 2025-09-20 11:15:25.176016 | orchestrator | 2025-09-20 11:15:21 | INFO  | Adding tag os:ubuntu 2025-09-20 11:15:25.176027 | orchestrator | 2025-09-20 11:15:21 | INFO  | Setting property architecture: x86_64 2025-09-20 11:15:25.176089 | orchestrator | 2025-09-20 11:15:21 | INFO  | Setting property hw_disk_bus: scsi 2025-09-20 11:15:25.176101 | orchestrator | 2025-09-20 11:15:21 | INFO  | Setting property hw_rng_model: virtio 2025-09-20 11:15:25.176112 | orchestrator | 2025-09-20 11:15:21 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-20 11:15:25.176140 | orchestrator | 2025-09-20 11:15:22 | INFO  | Setting property hw_watchdog_action: reset 2025-09-20 11:15:25.176151 | orchestrator | 2025-09-20 11:15:22 | INFO  | Setting property hypervisor_type: qemu 2025-09-20 11:15:25.176162 | orchestrator | 2025-09-20 11:15:22 | INFO  | Setting property os_distro: ubuntu 2025-09-20 11:15:25.176172 | orchestrator | 2025-09-20 11:15:22 | INFO  | Setting property replace_frequency: quarterly 2025-09-20 11:15:25.176183 | orchestrator | 2025-09-20 11:15:22 | INFO  | Setting property uuid_validity: last-1 2025-09-20 11:15:25.176194 | orchestrator | 2025-09-20 11:15:22 | INFO  | Setting property provided_until: none 2025-09-20 11:15:25.176204 | orchestrator | 2025-09-20 11:15:23 | INFO  | Setting property os_purpose: network 2025-09-20 11:15:25.176215 | orchestrator | 2025-09-20 11:15:23 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-09-20 11:15:25.176226 | orchestrator | 2025-09-20 11:15:23 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-09-20 11:15:25.176237 | orchestrator | 2025-09-20 11:15:23 | INFO  | Setting property internal_version: 2025-09-20 2025-09-20 11:15:25.176249 | orchestrator | 2025-09-20 11:15:24 | INFO  | Setting property image_original_user: ubuntu 2025-09-20 11:15:25.176261 | orchestrator | 2025-09-20 11:15:24 | INFO  | Setting property os_version: 2025-09-20 2025-09-20 11:15:25.176274 | orchestrator | 2025-09-20 11:15:24 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 11:15:25.176287 | orchestrator | 2025-09-20 11:15:24 | INFO  | Setting property image_build_date: 2025-09-20 2025-09-20 11:15:25.176299 | orchestrator | 2025-09-20 11:15:24 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-20' 2025-09-20 11:15:25.176311 | orchestrator | 2025-09-20 11:15:24 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-20' 2025-09-20 11:15:25.176349 | orchestrator | 2025-09-20 11:15:25 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-09-20 11:15:25.176362 | orchestrator | 2025-09-20 11:15:25 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-09-20 11:15:25.176375 | orchestrator | 
2025-09-20 11:15:25 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-09-20 11:15:25.176388 | orchestrator | 2025-09-20 11:15:25 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-09-20 11:15:25.777019 | orchestrator | ok: Runtime: 0:03:06.487901 2025-09-20 11:15:25.839515 | 2025-09-20 11:15:25.839644 | TASK [Run checks] 2025-09-20 11:15:26.538586 | orchestrator | + set -e 2025-09-20 11:15:26.538784 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 11:15:26.538807 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 11:15:26.538828 | orchestrator | ++ INTERACTIVE=false 2025-09-20 11:15:26.538841 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 11:15:26.538854 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 11:15:26.538868 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-20 11:15:26.539767 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-20 11:15:26.545989 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 11:15:26.546129 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 11:15:26.546147 | orchestrator | + echo 2025-09-20 11:15:26.546168 | orchestrator | 2025-09-20 11:15:26.546180 | orchestrator | # CHECK 2025-09-20 11:15:26.546191 | orchestrator | 2025-09-20 11:15:26.546214 | orchestrator | + echo '# CHECK' 2025-09-20 11:15:26.546225 | orchestrator | + echo 2025-09-20 11:15:26.546239 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-20 11:15:26.546991 | orchestrator | ++ semver latest 5.0.0 2025-09-20 11:15:26.611991 | orchestrator | 2025-09-20 11:15:26.612098 | orchestrator | ## Containers @ testbed-manager 2025-09-20 11:15:26.612113 | orchestrator | 2025-09-20 11:15:26.612127 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-20 11:15:26.612138 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 11:15:26.612149 | orchestrator | + echo 2025-09-20 11:15:26.612161 | orchestrator | + echo '## Containers @ testbed-manager' 2025-09-20 11:15:26.612172 | orchestrator | + echo 2025-09-20 11:15:26.612183 | orchestrator | + osism container testbed-manager ps 2025-09-20 11:15:28.959885 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-20 11:15:28.960020 | orchestrator | 4cf10047f2d5 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter 2025-09-20 11:15:28.960099 | orchestrator | 6c84cc3657b9 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-09-20 11:15:28.960121 | orchestrator | e4e87fa53d8c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-09-20 11:15:28.960133 | orchestrator | 7f3a0236c87c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-09-20 11:15:28.960145 | orchestrator | 8c9f28a2bc92 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2025-09-20 11:15:28.960162 | orchestrator | 0e519b76d29c registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2025-09-20 11:15:28.960173 | orchestrator | bf53457faad4 registry.osism.tech/kolla/cron:2024.2 
"dumb-init --single-…" 28 minutes ago Up 27 minutes cron 2025-09-20 11:15:28.960185 | orchestrator | d5242b3ff6d8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-09-20 11:15:28.960197 | orchestrator | e0aa3e169be9 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2025-09-20 11:15:28.960238 | orchestrator | b1a8d5c25241 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2025-09-20 11:15:28.960251 | orchestrator | 56ecbbf26b23 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient 2025-09-20 11:15:28.960262 | orchestrator | b02c81e053d6 registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2025-09-20 11:15:28.960273 | orchestrator | 902e7791cc93 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-09-20 11:15:28.960302 | orchestrator | 068a0e2137f3 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 55 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1 2025-09-20 11:15:28.960314 | orchestrator | dd9952c74f13 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 55 minutes ago Up 36 minutes (healthy) ceph-ansible 2025-09-20 11:15:28.960340 | orchestrator | 502a65ba4903 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 55 minutes ago Up 36 minutes (healthy) osism-kubernetes 2025-09-20 11:15:28.960359 | orchestrator | 57d8a672ed6c registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 55 minutes ago Up 36 minutes (healthy) osism-ansible 2025-09-20 11:15:28.960371 | orchestrator | ab70e07c437d registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 55 minutes ago Up 36 minutes (healthy) kolla-ansible 2025-09-20 11:15:28.960382 | orchestrator | dd1c19b4550c registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 55 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1 2025-09-20 11:15:28.960393 | orchestrator | 7b970dd8af36 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-09-20 11:15:28.960404 | orchestrator | f8e25e1cf05a registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 55 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1 2025-09-20 11:15:28.960416 | orchestrator | c8df6d86e6a2 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 55 minutes ago Up 36 minutes (healthy) osismclient 2025-09-20 11:15:28.960427 | orchestrator | e398ac5b27e3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-beat-1 2025-09-20 11:15:28.960438 | orchestrator | f6b3a6ec4abc registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 55 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2025-09-20 11:15:28.960460 | orchestrator | 97f986db240f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-flower-1 2025-09-20 11:15:28.963016 | orchestrator | 1389d8122516 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 55 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1 2025-09-20 
11:15:28.963039 | orchestrator | 1707006f82df registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-openstack-1 2025-09-20 11:15:28.963074 | orchestrator | caa6d7b2b869 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-listener-1 2025-09-20 11:15:28.963086 | orchestrator | 23b32b275626 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-09-20 11:15:29.257790 | orchestrator | 2025-09-20 11:15:29.257898 | orchestrator | ## Images @ testbed-manager 2025-09-20 11:15:29.257914 | orchestrator | 2025-09-20 11:15:29.257926 | orchestrator | + echo 2025-09-20 11:15:29.257937 | orchestrator | + echo '## Images @ testbed-manager' 2025-09-20 11:15:29.257949 | orchestrator | + echo 2025-09-20 11:15:29.257959 | orchestrator | + osism container testbed-manager images 2025-09-20 11:15:31.578816 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-20 11:15:31.578916 | orchestrator | registry.osism.tech/osism/osism-ansible latest 8e4565a8216f About an hour ago 594MB 2025-09-20 11:15:31.578931 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 3678b0970444 About an hour ago 315MB 2025-09-20 11:15:31.578943 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 bf64ce0d05f3 2 hours ago 590MB 2025-09-20 11:15:31.578954 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 80e4f11ca27c 2 hours ago 543MB 2025-09-20 11:15:31.578983 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 90e80b1f4869 2 hours ago 1.22GB 2025-09-20 11:15:31.578995 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 349210c49c4d 2 hours ago 243MB 2025-09-20 11:15:31.579006 | orchestrator | registry.osism.tech/osism/homer v25.08.1 270470b58639 8 hours ago 11.5MB 2025-09-20 11:15:31.579017 | orchestrator | registry.osism.tech/osism/cephclient reef 6eb6307c0ae7 8 hours ago 453MB 2025-09-20 11:15:31.579028 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 10 hours ago 631MB 2025-09-20 11:15:31.579039 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 10 hours ago 748MB 2025-09-20 11:15:31.579080 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 10 hours ago 320MB 2025-09-20 11:15:31.579091 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 2b0169669244 10 hours ago 459MB 2025-09-20 11:15:31.579103 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 10 hours ago 360MB 2025-09-20 11:15:31.579113 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 1dcff67faba4 10 hours ago 363MB 2025-09-20 11:15:31.579125 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 3a6ccecdfa92 10 hours ago 894MB 2025-09-20 11:15:31.579153 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 10 hours ago 412MB 2025-09-20 11:15:31.579165 | orchestrator | registry.osism.tech/osism/osism latest f7431a16d155 11 hours ago 325MB 2025-09-20 11:15:31.579175 | orchestrator | registry.osism.tech/osism/osism-frontend latest a19e06f175f5 11 hours ago 236MB 2025-09-20 11:15:31.579186 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 3 weeks ago 275MB 2025-09-20 11:15:31.579197 | orchestrator | 
registry.osism.tech/dockerhub/library/mariadb 11.8.3 48f7ae354376 6 weeks ago 329MB 2025-09-20 11:15:31.579208 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 weeks ago 226MB 2025-09-20 11:15:31.579219 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB 2025-09-20 11:15:31.579230 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB 2025-09-20 11:15:31.579241 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB 2025-09-20 11:15:31.890339 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-20 11:15:31.891438 | orchestrator | ++ semver latest 5.0.0 2025-09-20 11:15:31.952040 | orchestrator | 2025-09-20 11:15:31.952140 | orchestrator | ## Containers @ testbed-node-0 2025-09-20 11:15:31.952155 | orchestrator | 2025-09-20 11:15:31.952167 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-20 11:15:31.952178 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 11:15:31.952188 | orchestrator | + echo 2025-09-20 11:15:31.952202 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-09-20 11:15:31.952213 | orchestrator | + echo 2025-09-20 11:15:31.952224 | orchestrator | + osism container testbed-node-0 ps 2025-09-20 11:15:34.280237 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-20 11:15:34.280333 | orchestrator | 917ab6450c1d registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-20 11:15:34.280349 | orchestrator | d090cf967c5e registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-09-20 11:15:34.280361 | orchestrator | 56b9211cd065 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-09-20 11:15:34.280372 | orchestrator | d60c36ed8f77 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-09-20 11:15:34.280383 | orchestrator | 7b74a9bba92f registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api 2025-09-20 11:15:34.280395 | orchestrator | 91e8cabe5062 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-09-20 11:15:34.280426 | orchestrator | 9feb4d6e5324 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-09-20 11:15:34.280438 | orchestrator | 649abf86adb6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-09-20 11:15:34.280449 | orchestrator | bd5fd6b9b4aa registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-20 11:15:34.280460 | orchestrator | ecab9645145e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-09-20 11:15:34.280493 | orchestrator | b44e6e437f51 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-09-20 11:15:34.280519 | orchestrator | 7c61c2bc1e31 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes 
ago Up 9 minutes (healthy) designate_producer 2025-09-20 11:15:34.280541 | orchestrator | 326b23a44f6a registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-09-20 11:15:34.280553 | orchestrator | 5df27943522c registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-09-20 11:15:34.280564 | orchestrator | 4f2d1c5d8b03 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-09-20 11:15:34.280575 | orchestrator | 64324d8be087 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-09-20 11:15:34.280586 | orchestrator | db53e86fa5b2 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-09-20 11:15:34.280597 | orchestrator | b5d60ec9e405 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-09-20 11:15:34.280608 | orchestrator | 78ac93c5a33f registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) barbican_worker 2025-09-20 11:15:34.280619 | orchestrator | d24ea892ac67 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-09-20 11:15:34.280630 | orchestrator | 6b32eff8dc1a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-09-20 11:15:34.280655 | orchestrator | ebea2f0acf65 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-09-20 11:15:34.280667 | orchestrator | 22390947db62 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-09-20 11:15:34.280678 | orchestrator | abcc01bc8507 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-09-20 11:15:34.280690 | orchestrator | 2cde59e3fde7 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-09-20 11:15:34.280706 | orchestrator | 233bada88d87 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-09-20 11:15:34.280721 | orchestrator | 3f90c191c64e registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-09-20 11:15:34.280732 | orchestrator | 0e82e0f6d38d registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-09-20 11:15:34.280743 | orchestrator | 18a8c67d24bb registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-09-20 11:15:34.280761 | orchestrator | c72fa9cce678 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-09-20 11:15:34.280772 | orchestrator | f7400a9852d8 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 
13 minutes (healthy) cinder_api 2025-09-20 11:15:34.280783 | orchestrator | 269771336afd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-09-20 11:15:34.280794 | orchestrator | 69fc58094bc8 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-09-20 11:15:34.280805 | orchestrator | d9ce065c83db registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-09-20 11:15:34.280816 | orchestrator | 07fc1bf0c8bd registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-09-20 11:15:34.280827 | orchestrator | 781af4addc93 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-09-20 11:15:34.280838 | orchestrator | c583a592e531 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb 2025-09-20 11:15:34.280849 | orchestrator | 7932bf1d6e98 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-09-20 11:15:34.280859 | orchestrator | 08e8025df1a8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-09-20 11:15:34.280870 | orchestrator | 4fdcaadd6b77 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0 2025-09-20 11:15:34.280881 | orchestrator | 7d3f3f559e31 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2025-09-20 11:15:34.280893 | orchestrator | 59157e9a93e1 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-09-20 11:15:34.280904 | orchestrator | 7e4b7aef6aa7 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-09-20 11:15:34.280915 | orchestrator | 3971b40db581 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-09-20 11:15:34.280932 | orchestrator | 8d007496fcef registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-09-20 11:15:34.280943 | orchestrator | 6ca123df1b6f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-09-20 11:15:34.280954 | orchestrator | 58b61afea484 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-09-20 11:15:34.280965 | orchestrator | 77fab954eb81 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-09-20 11:15:34.280976 | orchestrator | f0389577d6c8 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-09-20 11:15:34.280992 | orchestrator | 4d696e8ab9e3 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2025-09-20 11:15:34.281007 | orchestrator | f212c842bf4a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-09-20 11:15:34.281019 
| orchestrator | ed42ea0174a9 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-09-20 11:15:34.281030 | orchestrator | 14259997d108 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-09-20 11:15:34.281041 | orchestrator | de3dca9947d8 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-09-20 11:15:34.281071 | orchestrator | 3f05f864f73d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-09-20 11:15:34.281082 | orchestrator | 0e5a5933188b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-09-20 11:15:34.281093 | orchestrator | 2f192fd380b6 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-09-20 11:15:34.508076 | orchestrator | 2025-09-20 11:15:34.508158 | orchestrator | ## Images @ testbed-node-0 2025-09-20 11:15:34.508170 | orchestrator | 2025-09-20 11:15:34.508181 | orchestrator | + echo 2025-09-20 11:15:34.508191 | orchestrator | + echo '## Images @ testbed-node-0' 2025-09-20 11:15:34.508201 | orchestrator | + echo 2025-09-20 11:15:34.508211 | orchestrator | + osism container testbed-node-0 images 2025-09-20 11:15:36.623212 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-20 11:15:36.623393 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a56e1a509897 8 hours ago 1.27GB 2025-09-20 11:15:36.623411 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 cbd748637569 10 hours ago 331MB 2025-09-20 11:15:36.623423 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d4294af2e892 10 hours ago 328MB 2025-09-20 11:15:36.623434 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 10 hours ago 631MB 2025-09-20 11:15:36.623444 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 10 hours ago 748MB 2025-09-20 11:15:36.623455 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c2d4086fa5d2 10 hours ago 321MB 2025-09-20 11:15:36.623465 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 062917222ddf 10 hours ago 1.59GB 2025-09-20 11:15:36.623476 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 807046badeab 10 hours ago 1.56GB 2025-09-20 11:15:36.623487 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 10 hours ago 320MB 2025-09-20 11:15:36.623498 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 c5be57c51f09 10 hours ago 1.05GB 2025-09-20 11:15:36.623509 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 279ed9090156 10 hours ago 420MB 2025-09-20 11:15:36.623519 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 74bf1faee51e 10 hours ago 377MB 2025-09-20 11:15:36.623550 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4c3412419f36 10 hours ago 327MB 2025-09-20 11:15:36.623562 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d07ecf7097cd 10 hours ago 327MB 2025-09-20 11:15:36.623595 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef300c8d9258 10 hours ago 1.21GB 2025-09-20 11:15:36.623606 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 a288348ddc28 10 hours ago 593MB 2025-09-20 11:15:36.623617 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 10 hours ago 360MB 2025-09-20 
11:15:36.623628 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8fdc5556ff2a 10 hours ago 356MB 2025-09-20 11:15:36.623639 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a86fe1b5d5d 10 hours ago 353MB 2025-09-20 11:15:36.623650 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 28f5d6626265 10 hours ago 347MB 2025-09-20 11:15:36.623660 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 10 hours ago 412MB 2025-09-20 11:15:36.623671 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d534700a4020 10 hours ago 364MB 2025-09-20 11:15:36.623682 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 2b2c27a921cf 10 hours ago 364MB 2025-09-20 11:15:36.623693 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 bebccfdca14f 10 hours ago 1.2GB 2025-09-20 11:15:36.623704 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 151d45e415a2 10 hours ago 1.31GB 2025-09-20 11:15:36.623714 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 90b8a0e973d1 10 hours ago 1.16GB 2025-09-20 11:15:36.623725 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 55718ea9eeb0 10 hours ago 1.11GB 2025-09-20 11:15:36.623736 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 4e25fa6e32b9 10 hours ago 1.11GB 2025-09-20 11:15:36.623747 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2c881665cc69 10 hours ago 1.04GB 2025-09-20 11:15:36.623757 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 5aea062cd9c4 10 hours ago 1.04GB 2025-09-20 11:15:36.623768 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 59d53505cfd8 10 hours ago 1.04GB 2025-09-20 11:15:36.623779 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 a7617ff32f8f 10 hours ago 1.04GB 2025-09-20 11:15:36.623790 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 7dae8c7e17e4 10 hours ago 1.04GB 2025-09-20 11:15:36.623800 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 ee60c82cc9e1 10 hours ago 1.04GB 2025-09-20 11:15:36.623811 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 b2a85ccbb20a 10 hours ago 1.04GB 2025-09-20 11:15:36.623821 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4d300102afee 10 hours ago 1.41GB 2025-09-20 11:15:36.623856 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 baa1bc8f1e13 10 hours ago 1.41GB 2025-09-20 11:15:36.623869 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 b4f55138c4ad 10 hours ago 1.1GB 2025-09-20 11:15:36.623879 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 2e361f9a2585 10 hours ago 1.12GB 2025-09-20 11:15:36.623891 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9492daf1f100 10 hours ago 1.12GB 2025-09-20 11:15:36.623901 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c0b88995f1b5 10 hours ago 1.1GB 2025-09-20 11:15:36.623912 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0297c70d41f5 10 hours ago 1.1GB 2025-09-20 11:15:36.623923 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6566218469d4 10 hours ago 1.06GB 2025-09-20 11:15:36.623934 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6297bc16ad52 10 hours ago 1.06GB 2025-09-20 11:15:36.623952 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 daa88d238be7 10 hours 
ago 1.06GB 2025-09-20 11:15:36.623963 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 478801741ccd 10 hours ago 1.3GB 2025-09-20 11:15:36.623973 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 304be05688f9 10 hours ago 1.3GB 2025-09-20 11:15:36.623984 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 6334b378f681 10 hours ago 1.42GB 2025-09-20 11:15:36.623995 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 3e549beb306f 10 hours ago 1.3GB 2025-09-20 11:15:36.624006 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 0cd53433cbeb 10 hours ago 1.05GB 2025-09-20 11:15:36.624016 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b4cb5c883aaa 10 hours ago 1.05GB 2025-09-20 11:15:36.624027 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 782b8fe503f7 10 hours ago 1.05GB 2025-09-20 11:15:36.624037 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 91b868db9be1 10 hours ago 1.06GB 2025-09-20 11:15:36.624048 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 c57fe4414b86 10 hours ago 1.05GB 2025-09-20 11:15:36.624089 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 88cef4128b0e 10 hours ago 1.06GB 2025-09-20 11:15:36.624100 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 286a7939de9e 10 hours ago 1.15GB 2025-09-20 11:15:36.624111 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3ed757503e46 10 hours ago 1.25GB 2025-09-20 11:15:36.624122 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ce1472f5dd7d 10 hours ago 1.12GB 2025-09-20 11:15:36.624133 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 d202b0000858 10 hours ago 1.11GB 2025-09-20 11:15:36.624143 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 63d70c74c0aa 10 hours ago 949MB 2025-09-20 11:15:36.624154 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 548b04fa8ce4 10 hours ago 949MB 2025-09-20 11:15:36.624165 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b9e7968e9914 10 hours ago 949MB 2025-09-20 11:15:36.624176 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bd01fffe8a34 10 hours ago 949MB 2025-09-20 11:15:36.838200 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-20 11:15:36.838568 | orchestrator | ++ semver latest 5.0.0 2025-09-20 11:15:36.886274 | orchestrator | 2025-09-20 11:15:36.886334 | orchestrator | ## Containers @ testbed-node-1 2025-09-20 11:15:36.886347 | orchestrator | 2025-09-20 11:15:36.886358 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-20 11:15:36.886369 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 11:15:36.886381 | orchestrator | + echo 2025-09-20 11:15:36.886392 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-09-20 11:15:36.886404 | orchestrator | + echo 2025-09-20 11:15:36.886415 | orchestrator | + osism container testbed-node-1 ps 2025-09-20 11:15:38.970769 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-20 11:15:38.970888 | orchestrator | 7dc0e9f3afa5 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-20 11:15:38.970936 | orchestrator | 37f0b3d36534 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-09-20 11:15:38.970949 | orchestrator | 43d15fb7d0cd 
registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-09-20 11:15:38.970985 | orchestrator | c74507a17f57 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-09-20 11:15:38.970996 | orchestrator | 8f6faf221403 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-09-20 11:15:38.971008 | orchestrator | c8d5f260e621 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-09-20 11:15:38.971019 | orchestrator | f361dbfd9683 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-09-20 11:15:38.971030 | orchestrator | 70375d3721a4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-09-20 11:15:38.971041 | orchestrator | 65c795835301 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-20 11:15:38.971052 | orchestrator | 8ccaf5d33bfb registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-09-20 11:15:38.971085 | orchestrator | 41eddbe9062e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-09-20 11:15:38.971097 | orchestrator | 1206859da20d registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-09-20 11:15:38.971108 | orchestrator | 88e3bd0bf187 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-09-20 11:15:38.971123 | orchestrator | ee82fe8ce6fa registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-09-20 11:15:38.971134 | orchestrator | 7ebd0f6ec107 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-09-20 11:15:38.971145 | orchestrator | 6374d9f536ef registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-09-20 11:15:38.971156 | orchestrator | c2355f41a72b registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-09-20 11:15:38.971167 | orchestrator | 49600d2f0fc8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-09-20 11:15:38.971179 | orchestrator | 4924114cbecb registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-09-20 11:15:38.971190 | orchestrator | ebc27ce4c3ba registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-09-20 11:15:38.971201 | orchestrator | 58a0f7a0ee82 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-09-20 11:15:38.971228 | orchestrator | 84784c02d081 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 
minutes (healthy) nova_api 2025-09-20 11:15:38.971246 | orchestrator | 4f50f168b1a3 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-09-20 11:15:38.971265 | orchestrator | 9f32031c70c6 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-09-20 11:15:38.971278 | orchestrator | 1d6278aa7c94 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-09-20 11:15:38.971289 | orchestrator | 0ad83d9fa227 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-09-20 11:15:38.971302 | orchestrator | 899942d7698b registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-09-20 11:15:38.971315 | orchestrator | 66344ae2254d registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-09-20 11:15:38.971328 | orchestrator | 870b7128aa93 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-09-20 11:15:38.971340 | orchestrator | 84ecdfe791f6 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-09-20 11:15:38.971354 | orchestrator | 8b313a919682 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-09-20 11:15:38.971366 | orchestrator | 5a3d126ca19f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-09-20 11:15:38.971379 | orchestrator | 5f2828733796 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-09-20 11:15:38.971391 | orchestrator | 1bf82528cda8 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-09-20 11:15:38.971403 | orchestrator | 5a11e720fb84 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-09-20 11:15:38.971417 | orchestrator | ef814cf22c74 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-09-20 11:15:38.971429 | orchestrator | 58887bac0881 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-09-20 11:15:38.971441 | orchestrator | 41d3bab4e2b7 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-09-20 11:15:38.971454 | orchestrator | 1b6c45fb7593 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-09-20 11:15:38.971467 | orchestrator | cd62cd662912 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2025-09-20 11:15:38.971479 | orchestrator | 85c34dd17313 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-09-20 11:15:38.971491 | orchestrator | 1f6a21db1bd9 
registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-09-20 11:15:38.971510 | orchestrator | 257a77ab0adf registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-09-20 11:15:38.971522 | orchestrator | 2291fcd7fae5 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-09-20 11:15:38.971542 | orchestrator | 028696c4be68 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2025-09-20 11:15:38.971555 | orchestrator | f46744fbae89 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2025-09-20 11:15:38.971573 | orchestrator | 240d922c719f registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2025-09-20 11:15:38.971586 | orchestrator | f18b0a8270d7 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-09-20 11:15:38.971603 | orchestrator | 0082811a5e96 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2025-09-20 11:15:38.971617 | orchestrator | ea0badbd9a3d registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-09-20 11:15:38.971630 | orchestrator | 4e1007bef1e8 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-09-20 11:15:38.971642 | orchestrator | 5c861a54b863 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-09-20 11:15:38.971654 | orchestrator | 26d2db197ace registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-09-20 11:15:38.971665 | orchestrator | 054dd1a8c619 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-09-20 11:15:38.971675 | orchestrator | c2dcfd0e48fb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-09-20 11:15:38.971687 | orchestrator | 779ffdd40ad9 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-09-20 11:15:38.971697 | orchestrator | ab20a9798685 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2025-09-20 11:15:39.187569 | orchestrator | 2025-09-20 11:15:39.187664 | orchestrator | ## Images @ testbed-node-1 2025-09-20 11:15:39.187679 | orchestrator | 2025-09-20 11:15:39.187691 | orchestrator | + echo 2025-09-20 11:15:39.187702 | orchestrator | + echo '## Images @ testbed-node-1' 2025-09-20 11:15:39.187714 | orchestrator | + echo 2025-09-20 11:15:39.187726 | orchestrator | + osism container testbed-node-1 images 2025-09-20 11:15:41.350565 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-20 11:15:41.350682 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a56e1a509897 8 hours ago 1.27GB 2025-09-20 11:15:41.350696 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 cbd748637569 10 hours ago 331MB 2025-09-20 11:15:41.350707 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d4294af2e892 10 hours ago 328MB 2025-09-20 
11:15:41.350719 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 10 hours ago 631MB 2025-09-20 11:15:41.350730 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 10 hours ago 748MB 2025-09-20 11:15:41.350792 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c2d4086fa5d2 10 hours ago 321MB 2025-09-20 11:15:41.350804 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 062917222ddf 10 hours ago 1.59GB 2025-09-20 11:15:41.350815 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 807046badeab 10 hours ago 1.56GB 2025-09-20 11:15:41.350826 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 10 hours ago 320MB 2025-09-20 11:15:41.350837 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 c5be57c51f09 10 hours ago 1.05GB 2025-09-20 11:15:41.350848 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 74bf1faee51e 10 hours ago 377MB 2025-09-20 11:15:41.350858 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 279ed9090156 10 hours ago 420MB 2025-09-20 11:15:41.350869 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4c3412419f36 10 hours ago 327MB 2025-09-20 11:15:41.350880 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d07ecf7097cd 10 hours ago 327MB 2025-09-20 11:15:41.350890 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef300c8d9258 10 hours ago 1.21GB 2025-09-20 11:15:41.350901 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 a288348ddc28 10 hours ago 593MB 2025-09-20 11:15:41.350912 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 10 hours ago 360MB 2025-09-20 11:15:41.350922 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8fdc5556ff2a 10 hours ago 356MB 2025-09-20 11:15:41.350933 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a86fe1b5d5d 10 hours ago 353MB 2025-09-20 11:15:41.350944 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 28f5d6626265 10 hours ago 347MB 2025-09-20 11:15:41.350954 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 10 hours ago 412MB 2025-09-20 11:15:41.350966 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d534700a4020 10 hours ago 364MB 2025-09-20 11:15:41.350976 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 2b2c27a921cf 10 hours ago 364MB 2025-09-20 11:15:41.350987 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 bebccfdca14f 10 hours ago 1.2GB 2025-09-20 11:15:41.350998 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 151d45e415a2 10 hours ago 1.31GB 2025-09-20 11:15:41.351026 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 90b8a0e973d1 10 hours ago 1.16GB 2025-09-20 11:15:41.351037 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 55718ea9eeb0 10 hours ago 1.11GB 2025-09-20 11:15:41.351048 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 4e25fa6e32b9 10 hours ago 1.11GB 2025-09-20 11:15:41.351079 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2c881665cc69 10 hours ago 1.04GB 2025-09-20 11:15:41.351090 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4d300102afee 10 hours ago 1.41GB 2025-09-20 11:15:41.351103 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 baa1bc8f1e13 10 hours ago 1.41GB 2025-09-20 11:15:41.351115 | orchestrator | 
registry.osism.tech/kolla/octavia-health-manager 2024.2 b4f55138c4ad 10 hours ago 1.1GB 2025-09-20 11:15:41.351128 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 2e361f9a2585 10 hours ago 1.12GB 2025-09-20 11:15:41.351140 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9492daf1f100 10 hours ago 1.12GB 2025-09-20 11:15:41.351160 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c0b88995f1b5 10 hours ago 1.1GB 2025-09-20 11:15:41.351172 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0297c70d41f5 10 hours ago 1.1GB 2025-09-20 11:15:41.351201 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6566218469d4 10 hours ago 1.06GB 2025-09-20 11:15:41.351216 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6297bc16ad52 10 hours ago 1.06GB 2025-09-20 11:15:41.351229 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 daa88d238be7 10 hours ago 1.06GB 2025-09-20 11:15:41.351242 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 478801741ccd 10 hours ago 1.3GB 2025-09-20 11:15:41.351255 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 304be05688f9 10 hours ago 1.3GB 2025-09-20 11:15:41.351268 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 6334b378f681 10 hours ago 1.42GB 2025-09-20 11:15:41.351280 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 3e549beb306f 10 hours ago 1.3GB 2025-09-20 11:15:41.351292 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 0cd53433cbeb 10 hours ago 1.05GB 2025-09-20 11:15:41.351304 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b4cb5c883aaa 10 hours ago 1.05GB 2025-09-20 11:15:41.351316 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 782b8fe503f7 10 hours ago 1.05GB 2025-09-20 11:15:41.351328 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 91b868db9be1 10 hours ago 1.06GB 2025-09-20 11:15:41.351340 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 c57fe4414b86 10 hours ago 1.05GB 2025-09-20 11:15:41.351353 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 88cef4128b0e 10 hours ago 1.06GB 2025-09-20 11:15:41.351366 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 286a7939de9e 10 hours ago 1.15GB 2025-09-20 11:15:41.351378 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3ed757503e46 10 hours ago 1.25GB 2025-09-20 11:15:41.351391 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 548b04fa8ce4 10 hours ago 949MB 2025-09-20 11:15:41.351403 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 63d70c74c0aa 10 hours ago 949MB 2025-09-20 11:15:41.351415 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b9e7968e9914 10 hours ago 949MB 2025-09-20 11:15:41.351428 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bd01fffe8a34 10 hours ago 949MB 2025-09-20 11:15:41.589181 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-20 11:15:41.589395 | orchestrator | ++ semver latest 5.0.0 2025-09-20 11:15:41.627638 | orchestrator | 2025-09-20 11:15:41.627732 | orchestrator | ## Containers @ testbed-node-2 2025-09-20 11:15:41.627745 | orchestrator | 2025-09-20 11:15:41.627752 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-20 11:15:41.627758 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 11:15:41.627764 | orchestrator | + echo 2025-09-20 11:15:41.627772 | 
orchestrator | + echo '## Containers @ testbed-node-2' 2025-09-20 11:15:41.627778 | orchestrator | + echo 2025-09-20 11:15:41.627782 | orchestrator | + osism container testbed-node-2 ps 2025-09-20 11:15:43.747819 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-20 11:15:43.747895 | orchestrator | 41292c334898 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-20 11:15:43.747902 | orchestrator | b58f79e93574 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-09-20 11:15:43.747924 | orchestrator | bfcda7227813 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-09-20 11:15:43.747929 | orchestrator | 99a88cc4b9e9 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-09-20 11:15:43.747933 | orchestrator | 00ce2decc9c3 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-09-20 11:15:43.747938 | orchestrator | 6d660f0b7425 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-09-20 11:15:43.747943 | orchestrator | 9f00593e5c1c registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-09-20 11:15:43.747947 | orchestrator | 6ccd9c08bc7c registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-09-20 11:15:43.747951 | orchestrator | 5d65adc3213a registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-20 11:15:43.747968 | orchestrator | 414e45063738 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-09-20 11:15:43.747973 | orchestrator | 0359c4800b46 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-09-20 11:15:43.747978 | orchestrator | ad9b458b0092 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-09-20 11:15:43.747982 | orchestrator | 2856bcd8196c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-09-20 11:15:43.747987 | orchestrator | 77136b98d0d5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-09-20 11:15:43.747991 | orchestrator | 34e54ef963ed registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-09-20 11:15:43.747995 | orchestrator | 49b64798b42c registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-09-20 11:15:43.748000 | orchestrator | 1dc0797b8993 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-09-20 11:15:43.748004 | orchestrator | fa678e3d3ccb registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
designate_backend_bind9 2025-09-20 11:15:43.748009 | orchestrator | a19d37519daf registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-09-20 11:15:43.748013 | orchestrator | 3ed8f98e6860 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-09-20 11:15:43.748018 | orchestrator | 49c15915e55c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-09-20 11:15:43.748031 | orchestrator | 48bc248a5097 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-09-20 11:15:43.748041 | orchestrator | cf26c5cc46bb registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-09-20 11:15:43.748046 | orchestrator | fece5914b0f4 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-09-20 11:15:43.748051 | orchestrator | 01da898eb55d registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-09-20 11:15:43.748056 | orchestrator | c4ab77711556 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-09-20 11:15:43.748060 | orchestrator | 69910dfbec15 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-09-20 11:15:43.748094 | orchestrator | 94e7432e9d3b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-09-20 11:15:43.748099 | orchestrator | 77d42ab034be registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-09-20 11:15:43.748103 | orchestrator | 6ca0fff4f018 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-09-20 11:15:43.748107 | orchestrator | af215cae576f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-09-20 11:15:43.748112 | orchestrator | ba13e2a72236 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-09-20 11:15:43.748116 | orchestrator | f1a335fea00f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-09-20 11:15:43.748121 | orchestrator | 6f4eb1e32a20 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-09-20 11:15:43.748125 | orchestrator | a0a01c09db31 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-09-20 11:15:43.748129 | orchestrator | 4fa90080e063 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-09-20 11:15:43.748134 | orchestrator | 22e4a16d7189 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-09-20 
11:15:43.748138 | orchestrator | bfaebdc953e0 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-09-20 11:15:43.748143 | orchestrator | 3ac0d60b4493 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-09-20 11:15:43.748147 | orchestrator | 0bc7516d2a65 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2025-09-20 11:15:43.748152 | orchestrator | 2f00ed0c2999 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-09-20 11:15:43.748160 | orchestrator | afc1ff275532 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-09-20 11:15:43.748165 | orchestrator | 03e698548d4e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-09-20 11:15:43.748173 | orchestrator | f87341046562 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-09-20 11:15:43.748181 | orchestrator | e16d5147fe1e registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2025-09-20 11:15:43.748188 | orchestrator | e3ce9ed52a39 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2025-09-20 11:15:43.748193 | orchestrator | 826774b294fa registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-09-20 11:15:43.748197 | orchestrator | 8d07346dba87 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-09-20 11:15:43.748202 | orchestrator | 62fe8021ea63 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2025-09-20 11:15:43.748206 | orchestrator | ce7ee4de1620 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-09-20 11:15:43.748210 | orchestrator | 8d937b74857c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-09-20 11:15:43.748215 | orchestrator | 6d491f4dda04 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-09-20 11:15:43.748219 | orchestrator | 41928f12830a registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-09-20 11:15:43.748223 | orchestrator | 46be5d2037e5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-09-20 11:15:43.748228 | orchestrator | f02999827ebc registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-09-20 11:15:43.748233 | orchestrator | 7f3b067fef81 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-09-20 11:15:43.748237 | orchestrator | 9fc1f41532c2 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2025-09-20 11:15:43.952107 | orchestrator | 2025-09-20 11:15:43.952208 | orchestrator | ## Images @ testbed-node-2 2025-09-20 
11:15:43.952225 | orchestrator | 2025-09-20 11:15:43.952237 | orchestrator | + echo 2025-09-20 11:15:43.952249 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-20 11:15:43.952261 | orchestrator | + echo 2025-09-20 11:15:43.952273 | orchestrator | + osism container testbed-node-2 images 2025-09-20 11:15:46.168593 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-20 11:15:46.168700 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a56e1a509897 8 hours ago 1.27GB 2025-09-20 11:15:46.168716 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 cbd748637569 10 hours ago 331MB 2025-09-20 11:15:46.168727 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d4294af2e892 10 hours ago 328MB 2025-09-20 11:15:46.168761 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 10 hours ago 631MB 2025-09-20 11:15:46.168772 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 10 hours ago 748MB 2025-09-20 11:15:46.168783 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c2d4086fa5d2 10 hours ago 321MB 2025-09-20 11:15:46.168793 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 062917222ddf 10 hours ago 1.59GB 2025-09-20 11:15:46.168804 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 807046badeab 10 hours ago 1.56GB 2025-09-20 11:15:46.168815 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 10 hours ago 320MB 2025-09-20 11:15:46.168825 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 c5be57c51f09 10 hours ago 1.05GB 2025-09-20 11:15:46.168836 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 74bf1faee51e 10 hours ago 377MB 2025-09-20 11:15:46.168846 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 279ed9090156 10 hours ago 420MB 2025-09-20 11:15:46.168857 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4c3412419f36 10 hours ago 327MB 2025-09-20 11:15:46.168868 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d07ecf7097cd 10 hours ago 327MB 2025-09-20 11:15:46.168878 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef300c8d9258 10 hours ago 1.21GB 2025-09-20 11:15:46.168889 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 a288348ddc28 10 hours ago 593MB 2025-09-20 11:15:46.168900 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8fdc5556ff2a 10 hours ago 356MB 2025-09-20 11:15:46.168910 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 10 hours ago 360MB 2025-09-20 11:15:46.168922 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a86fe1b5d5d 10 hours ago 353MB 2025-09-20 11:15:46.168943 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 28f5d6626265 10 hours ago 347MB 2025-09-20 11:15:46.168962 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 10 hours ago 412MB 2025-09-20 11:15:46.168979 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d534700a4020 10 hours ago 364MB 2025-09-20 11:15:46.168996 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 2b2c27a921cf 10 hours ago 364MB 2025-09-20 11:15:46.169015 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 bebccfdca14f 10 hours ago 1.2GB 2025-09-20 11:15:46.169034 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 151d45e415a2 10 hours ago 1.31GB 2025-09-20 11:15:46.169052 | orchestrator | 
registry.osism.tech/kolla/keystone 2024.2 90b8a0e973d1 10 hours ago 1.16GB 2025-09-20 11:15:46.169095 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 55718ea9eeb0 10 hours ago 1.11GB 2025-09-20 11:15:46.169117 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 4e25fa6e32b9 10 hours ago 1.11GB 2025-09-20 11:15:46.169136 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2c881665cc69 10 hours ago 1.04GB 2025-09-20 11:15:46.169156 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4d300102afee 10 hours ago 1.41GB 2025-09-20 11:15:46.169177 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 baa1bc8f1e13 10 hours ago 1.41GB 2025-09-20 11:15:46.169197 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 b4f55138c4ad 10 hours ago 1.1GB 2025-09-20 11:15:46.169216 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 2e361f9a2585 10 hours ago 1.12GB 2025-09-20 11:15:46.169239 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9492daf1f100 10 hours ago 1.12GB 2025-09-20 11:15:46.169256 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c0b88995f1b5 10 hours ago 1.1GB 2025-09-20 11:15:46.169275 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0297c70d41f5 10 hours ago 1.1GB 2025-09-20 11:15:46.169317 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6566218469d4 10 hours ago 1.06GB 2025-09-20 11:15:46.169336 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6297bc16ad52 10 hours ago 1.06GB 2025-09-20 11:15:46.169354 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 daa88d238be7 10 hours ago 1.06GB 2025-09-20 11:15:46.169396 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 478801741ccd 10 hours ago 1.3GB 2025-09-20 11:15:46.169418 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 304be05688f9 10 hours ago 1.3GB 2025-09-20 11:15:46.169436 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 6334b378f681 10 hours ago 1.42GB 2025-09-20 11:15:46.169455 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 3e549beb306f 10 hours ago 1.3GB 2025-09-20 11:15:46.169467 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 0cd53433cbeb 10 hours ago 1.05GB 2025-09-20 11:15:46.169478 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b4cb5c883aaa 10 hours ago 1.05GB 2025-09-20 11:15:46.169488 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 782b8fe503f7 10 hours ago 1.05GB 2025-09-20 11:15:46.169499 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 91b868db9be1 10 hours ago 1.06GB 2025-09-20 11:15:46.169509 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 c57fe4414b86 10 hours ago 1.05GB 2025-09-20 11:15:46.169520 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 88cef4128b0e 10 hours ago 1.06GB 2025-09-20 11:15:46.169530 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 286a7939de9e 10 hours ago 1.15GB 2025-09-20 11:15:46.169541 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3ed757503e46 10 hours ago 1.25GB 2025-09-20 11:15:46.169551 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 548b04fa8ce4 10 hours ago 949MB 2025-09-20 11:15:46.169562 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 63d70c74c0aa 10 hours ago 949MB 2025-09-20 11:15:46.169572 | orchestrator | 
registry.osism.tech/kolla/ovn-northd 2024.2 b9e7968e9914 10 hours ago 949MB 2025-09-20 11:15:46.169590 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bd01fffe8a34 10 hours ago 949MB 2025-09-20 11:15:46.471060 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-09-20 11:15:46.479473 | orchestrator | + set -e 2025-09-20 11:15:46.479540 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 11:15:46.480682 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 11:15:46.480705 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 11:15:46.480716 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 11:15:46.480727 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 11:15:46.480738 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 11:15:46.480750 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 11:15:46.480761 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 11:15:46.480772 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 11:15:46.480782 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 11:15:46.480793 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 11:15:46.480804 | orchestrator | ++ export ARA=false 2025-09-20 11:15:46.480815 | orchestrator | ++ ARA=false 2025-09-20 11:15:46.480842 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 11:15:46.480858 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 11:15:46.480881 | orchestrator | ++ export TEMPEST=false 2025-09-20 11:15:46.480919 | orchestrator | ++ TEMPEST=false 2025-09-20 11:15:46.480931 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 11:15:46.480941 | orchestrator | ++ IS_ZUUL=true 2025-09-20 11:15:46.480952 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 11:15:46.480963 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 11:15:46.480974 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 11:15:46.480985 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 11:15:46.480995 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 11:15:46.481006 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 11:15:46.481016 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 11:15:46.481027 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 11:15:46.481038 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 11:15:46.481048 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 11:15:46.481059 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-20 11:15:46.481092 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-09-20 11:15:46.491260 | orchestrator | + set -e 2025-09-20 11:15:46.492148 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 11:15:46.492177 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 11:15:46.492189 | orchestrator | ++ INTERACTIVE=false 2025-09-20 11:15:46.492200 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 11:15:46.492211 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 11:15:46.492222 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-20 11:15:46.492406 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-20 11:15:46.495881 | orchestrator | 2025-09-20 11:15:46.495959 | orchestrator | # Ceph status 2025-09-20 11:15:46.495981 | orchestrator | 2025-09-20 11:15:46.496001 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 11:15:46.496020 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-09-20 11:15:46.496036 | orchestrator | + echo
2025-09-20 11:15:46.496053 | orchestrator | + echo '# Ceph status'
2025-09-20 11:15:46.496096 | orchestrator | + echo
2025-09-20 11:15:46.496115 | orchestrator | + ceph -s
2025-09-20 11:15:47.079005 | orchestrator | cluster:
2025-09-20 11:15:47.079138 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-09-20 11:15:47.079154 | orchestrator | health: HEALTH_OK
2025-09-20 11:15:47.079166 | orchestrator |
2025-09-20 11:15:47.079177 | orchestrator | services:
2025-09-20 11:15:47.079189 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m)
2025-09-20 11:15:47.079201 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2
2025-09-20 11:15:47.079213 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-09-20 11:15:47.079224 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m)
2025-09-20 11:15:47.079235 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-09-20 11:15:47.079246 | orchestrator |
2025-09-20 11:15:47.079257 | orchestrator | data:
2025-09-20 11:15:47.079268 | orchestrator | volumes: 1/1 healthy
2025-09-20 11:15:47.079279 | orchestrator | pools: 14 pools, 401 pgs
2025-09-20 11:15:47.079290 | orchestrator | objects: 524 objects, 2.2 GiB
2025-09-20 11:15:47.079301 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-09-20 11:15:47.079312 | orchestrator | pgs: 401 active+clean
2025-09-20 11:15:47.079323 | orchestrator |
2025-09-20 11:15:47.121939 | orchestrator |
2025-09-20 11:15:47.122111 | orchestrator | # Ceph versions
2025-09-20 11:15:47.122126 | orchestrator |
2025-09-20 11:15:47.122136 | orchestrator | + echo
2025-09-20 11:15:47.122145 | orchestrator | + echo '# Ceph versions'
2025-09-20 11:15:47.122155 | orchestrator | + echo
2025-09-20 11:15:47.122164 | orchestrator | + ceph versions
2025-09-20 11:15:47.723990 | orchestrator | {
2025-09-20 11:15:47.724143 | orchestrator | "mon": {
2025-09-20 11:15:47.724160 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-20 11:15:47.724172 | orchestrator | },
2025-09-20 11:15:47.724184 | orchestrator | "mgr": {
2025-09-20 11:15:47.724195 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-20 11:15:47.724206 | orchestrator | },
2025-09-20 11:15:47.724217 | orchestrator | "osd": {
2025-09-20 11:15:47.724228 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-09-20 11:15:47.724238 | orchestrator | },
2025-09-20 11:15:47.724249 | orchestrator | "mds": {
2025-09-20 11:15:47.724260 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-20 11:15:47.724271 | orchestrator | },
2025-09-20 11:15:47.724282 | orchestrator | "rgw": {
2025-09-20 11:15:47.724326 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-20 11:15:47.724338 | orchestrator | },
2025-09-20 11:15:47.724349 | orchestrator | "overall": {
2025-09-20 11:15:47.724360 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-09-20 11:15:47.724371 | orchestrator | }
2025-09-20 11:15:47.724382 | orchestrator | }
2025-09-20 11:15:47.767969 | orchestrator |
2025-09-20 11:15:47.768064 | orchestrator | # Ceph OSD tree
2025-09-20 11:15:47.768115 | orchestrator |
2025-09-20
11:15:47.768127 | orchestrator | + echo 2025-09-20 11:15:47.768138 | orchestrator | + echo '# Ceph OSD tree' 2025-09-20 11:15:47.768151 | orchestrator | + echo 2025-09-20 11:15:47.768162 | orchestrator | + ceph osd df tree 2025-09-20 11:15:48.330693 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-09-20 11:15:48.330793 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 421 MiB 113 GiB 5.91 1.00 - root default 2025-09-20 11:15:48.330804 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2025-09-20 11:15:48.330813 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.24 1.22 191 up osd.0 2025-09-20 11:15:48.330821 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 936 MiB 867 MiB 1 KiB 70 MiB 19 GiB 4.58 0.77 197 up osd.5 2025-09-20 11:15:48.330829 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2025-09-20 11:15:48.330837 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.41 1.09 192 up osd.2 2025-09-20 11:15:48.330845 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.40 0.91 200 up osd.3 2025-09-20 11:15:48.330852 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2025-09-20 11:15:48.330860 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.12 1.20 204 up osd.1 2025-09-20 11:15:48.330868 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 964 MiB 891 MiB 1 KiB 74 MiB 19 GiB 4.71 0.80 186 up osd.4 2025-09-20 11:15:48.330876 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 421 MiB 113 GiB 5.91 2025-09-20 11:15:48.330884 | orchestrator | MIN/MAX VAR: 0.77/1.22 STDDEV: 1.08 2025-09-20 11:15:48.387745 | orchestrator | 2025-09-20 11:15:48.387846 | orchestrator | # Ceph monitor status 2025-09-20 11:15:48.387861 | orchestrator | 2025-09-20 11:15:48.387873 | orchestrator | + echo 2025-09-20 11:15:48.387884 | orchestrator | + echo '# Ceph monitor status' 2025-09-20 11:15:48.387895 | orchestrator | + echo 2025-09-20 11:15:48.387905 | orchestrator | + ceph mon stat 2025-09-20 11:15:49.019405 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 10, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-09-20 11:15:49.065260 | orchestrator | 2025-09-20 11:15:49.065345 | orchestrator | # Ceph quorum status 2025-09-20 11:15:49.065360 | orchestrator | 2025-09-20 11:15:49.065370 | orchestrator | + echo 2025-09-20 11:15:49.065381 | orchestrator | + echo '# Ceph quorum status' 2025-09-20 11:15:49.065391 | orchestrator | + echo 2025-09-20 11:15:49.065414 | orchestrator | + ceph quorum_status 2025-09-20 11:15:49.065867 | orchestrator | + jq 2025-09-20 11:15:49.716883 | orchestrator | { 2025-09-20 11:15:49.716987 | orchestrator | "election_epoch": 10, 2025-09-20 11:15:49.717004 | orchestrator | "quorum": [ 2025-09-20 11:15:49.717016 | orchestrator | 0, 2025-09-20 11:15:49.717027 | orchestrator | 1, 2025-09-20 11:15:49.717038 | orchestrator | 2 2025-09-20 11:15:49.717048 | orchestrator | ], 2025-09-20 11:15:49.717059 | orchestrator | "quorum_names": [ 2025-09-20 11:15:49.717110 | orchestrator | 
"testbed-node-0", 2025-09-20 11:15:49.717125 | orchestrator | "testbed-node-1", 2025-09-20 11:15:49.717161 | orchestrator | "testbed-node-2" 2025-09-20 11:15:49.717172 | orchestrator | ], 2025-09-20 11:15:49.717183 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-09-20 11:15:49.717195 | orchestrator | "quorum_age": 1559, 2025-09-20 11:15:49.717206 | orchestrator | "features": { 2025-09-20 11:15:49.717216 | orchestrator | "quorum_con": "4540138322906710015", 2025-09-20 11:15:49.717431 | orchestrator | "quorum_mon": [ 2025-09-20 11:15:49.717449 | orchestrator | "kraken", 2025-09-20 11:15:49.717460 | orchestrator | "luminous", 2025-09-20 11:15:49.717471 | orchestrator | "mimic", 2025-09-20 11:15:49.717482 | orchestrator | "osdmap-prune", 2025-09-20 11:15:49.717492 | orchestrator | "nautilus", 2025-09-20 11:15:49.717503 | orchestrator | "octopus", 2025-09-20 11:15:49.717514 | orchestrator | "pacific", 2025-09-20 11:15:49.717525 | orchestrator | "elector-pinging", 2025-09-20 11:15:49.717535 | orchestrator | "quincy", 2025-09-20 11:15:49.717546 | orchestrator | "reef" 2025-09-20 11:15:49.717556 | orchestrator | ] 2025-09-20 11:15:49.717567 | orchestrator | }, 2025-09-20 11:15:49.717578 | orchestrator | "monmap": { 2025-09-20 11:15:49.717588 | orchestrator | "epoch": 1, 2025-09-20 11:15:49.717599 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-09-20 11:15:49.717610 | orchestrator | "modified": "2025-09-20T10:49:21.032959Z", 2025-09-20 11:15:49.717621 | orchestrator | "created": "2025-09-20T10:49:21.032959Z", 2025-09-20 11:15:49.717632 | orchestrator | "min_mon_release": 18, 2025-09-20 11:15:49.717642 | orchestrator | "min_mon_release_name": "reef", 2025-09-20 11:15:49.717653 | orchestrator | "election_strategy": 1, 2025-09-20 11:15:49.717663 | orchestrator | "disallowed_leaders: ": "", 2025-09-20 11:15:49.717674 | orchestrator | "stretch_mode": false, 2025-09-20 11:15:49.717685 | orchestrator | "tiebreaker_mon": "", 2025-09-20 11:15:49.717695 | orchestrator | "removed_ranks: ": "", 2025-09-20 11:15:49.717705 | orchestrator | "features": { 2025-09-20 11:15:49.717716 | orchestrator | "persistent": [ 2025-09-20 11:15:49.717726 | orchestrator | "kraken", 2025-09-20 11:15:49.717737 | orchestrator | "luminous", 2025-09-20 11:15:49.717747 | orchestrator | "mimic", 2025-09-20 11:15:49.717757 | orchestrator | "osdmap-prune", 2025-09-20 11:15:49.717768 | orchestrator | "nautilus", 2025-09-20 11:15:49.717795 | orchestrator | "octopus", 2025-09-20 11:15:49.717806 | orchestrator | "pacific", 2025-09-20 11:15:49.717817 | orchestrator | "elector-pinging", 2025-09-20 11:15:49.717827 | orchestrator | "quincy", 2025-09-20 11:15:49.717838 | orchestrator | "reef" 2025-09-20 11:15:49.717849 | orchestrator | ], 2025-09-20 11:15:49.717859 | orchestrator | "optional": [] 2025-09-20 11:15:49.717871 | orchestrator | }, 2025-09-20 11:15:49.717881 | orchestrator | "mons": [ 2025-09-20 11:15:49.717892 | orchestrator | { 2025-09-20 11:15:49.717903 | orchestrator | "rank": 0, 2025-09-20 11:15:49.717913 | orchestrator | "name": "testbed-node-0", 2025-09-20 11:15:49.717924 | orchestrator | "public_addrs": { 2025-09-20 11:15:49.717935 | orchestrator | "addrvec": [ 2025-09-20 11:15:49.717945 | orchestrator | { 2025-09-20 11:15:49.717955 | orchestrator | "type": "v2", 2025-09-20 11:15:49.717966 | orchestrator | "addr": "192.168.16.10:3300", 2025-09-20 11:15:49.717977 | orchestrator | "nonce": 0 2025-09-20 11:15:49.717987 | orchestrator | }, 2025-09-20 11:15:49.717998 | orchestrator | { 2025-09-20 
11:15:49.718008 | orchestrator | "type": "v1", 2025-09-20 11:15:49.718139 | orchestrator | "addr": "192.168.16.10:6789", 2025-09-20 11:15:49.718153 | orchestrator | "nonce": 0 2025-09-20 11:15:49.718165 | orchestrator | } 2025-09-20 11:15:49.718177 | orchestrator | ] 2025-09-20 11:15:49.718189 | orchestrator | }, 2025-09-20 11:15:49.718201 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-09-20 11:15:49.718212 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-09-20 11:15:49.718223 | orchestrator | "priority": 0, 2025-09-20 11:15:49.718233 | orchestrator | "weight": 0, 2025-09-20 11:15:49.718244 | orchestrator | "crush_location": "{}" 2025-09-20 11:15:49.718254 | orchestrator | }, 2025-09-20 11:15:49.718265 | orchestrator | { 2025-09-20 11:15:49.718275 | orchestrator | "rank": 1, 2025-09-20 11:15:49.718286 | orchestrator | "name": "testbed-node-1", 2025-09-20 11:15:49.718296 | orchestrator | "public_addrs": { 2025-09-20 11:15:49.718307 | orchestrator | "addrvec": [ 2025-09-20 11:15:49.718317 | orchestrator | { 2025-09-20 11:15:49.718327 | orchestrator | "type": "v2", 2025-09-20 11:15:49.718338 | orchestrator | "addr": "192.168.16.11:3300", 2025-09-20 11:15:49.718349 | orchestrator | "nonce": 0 2025-09-20 11:15:49.718413 | orchestrator | }, 2025-09-20 11:15:49.718424 | orchestrator | { 2025-09-20 11:15:49.718435 | orchestrator | "type": "v1", 2025-09-20 11:15:49.718446 | orchestrator | "addr": "192.168.16.11:6789", 2025-09-20 11:15:49.718456 | orchestrator | "nonce": 0 2025-09-20 11:15:49.718467 | orchestrator | } 2025-09-20 11:15:49.718478 | orchestrator | ] 2025-09-20 11:15:49.718488 | orchestrator | }, 2025-09-20 11:15:49.718499 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-09-20 11:15:49.718510 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-09-20 11:15:49.718520 | orchestrator | "priority": 0, 2025-09-20 11:15:49.718531 | orchestrator | "weight": 0, 2025-09-20 11:15:49.718541 | orchestrator | "crush_location": "{}" 2025-09-20 11:15:49.718552 | orchestrator | }, 2025-09-20 11:15:49.718563 | orchestrator | { 2025-09-20 11:15:49.718573 | orchestrator | "rank": 2, 2025-09-20 11:15:49.718584 | orchestrator | "name": "testbed-node-2", 2025-09-20 11:15:49.718594 | orchestrator | "public_addrs": { 2025-09-20 11:15:49.718605 | orchestrator | "addrvec": [ 2025-09-20 11:15:49.718616 | orchestrator | { 2025-09-20 11:15:49.718626 | orchestrator | "type": "v2", 2025-09-20 11:15:49.718637 | orchestrator | "addr": "192.168.16.12:3300", 2025-09-20 11:15:49.718648 | orchestrator | "nonce": 0 2025-09-20 11:15:49.718658 | orchestrator | }, 2025-09-20 11:15:49.718669 | orchestrator | { 2025-09-20 11:15:49.718679 | orchestrator | "type": "v1", 2025-09-20 11:15:49.718690 | orchestrator | "addr": "192.168.16.12:6789", 2025-09-20 11:15:49.718701 | orchestrator | "nonce": 0 2025-09-20 11:15:49.718711 | orchestrator | } 2025-09-20 11:15:49.718722 | orchestrator | ] 2025-09-20 11:15:49.718733 | orchestrator | }, 2025-09-20 11:15:49.718743 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-09-20 11:15:49.718754 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-09-20 11:15:49.718765 | orchestrator | "priority": 0, 2025-09-20 11:15:49.718776 | orchestrator | "weight": 0, 2025-09-20 11:15:49.718786 | orchestrator | "crush_location": "{}" 2025-09-20 11:15:49.718797 | orchestrator | } 2025-09-20 11:15:49.718807 | orchestrator | ] 2025-09-20 11:15:49.718818 | orchestrator | } 2025-09-20 11:15:49.718828 | orchestrator | } 2025-09-20 11:15:49.718853 | orchestrator | 
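The quorum_status document above carries the same information that the ceph-mons validator later checks with its "all monitors are in quorum" test: every monitor listed under monmap.mons must also appear in the quorum array. A minimal shell sketch of that check, assuming only the ceph CLI and jq that are already used in this run (the variable names are illustrative, not taken from the testbed scripts):

    # Sketch: fail when not every monitor from the monmap is in quorum.
    status="$(ceph quorum_status)"
    expected="$(jq '.monmap.mons | length' <<< "${status}")"
    actual="$(jq '.quorum | length' <<< "${status}")"
    if [[ "${actual}" -ne "${expected}" ]]; then
        echo "only ${actual} of ${expected} monitors in quorum" >&2
        exit 1
    fi
    echo "quorum OK (${actual}/${expected}), leader: $(jq -r '.quorum_leader_name' <<< "${status}")"

Against the output above this would report 3/3 monitors in quorum with testbed-node-0 as the leader.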
2025-09-20 11:15:49.718865 | orchestrator | # Ceph free space status
2025-09-20 11:15:49.718876 | orchestrator |
2025-09-20 11:15:49.718886 | orchestrator | + echo
2025-09-20 11:15:49.718897 | orchestrator | + echo '# Ceph free space status'
2025-09-20 11:15:49.718908 | orchestrator | + echo
2025-09-20 11:15:49.718919 | orchestrator | + ceph df
2025-09-20 11:15:50.315497 | orchestrator | --- RAW STORAGE ---
2025-09-20 11:15:50.315604 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-20 11:15:50.315632 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2025-09-20 11:15:50.315645 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2025-09-20 11:15:50.315657 | orchestrator |
2025-09-20 11:15:50.315669 | orchestrator | --- POOLS ---
2025-09-20 11:15:50.315681 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-20 11:15:50.315709 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-09-20 11:15:50.315721 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-20 11:15:50.315731 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-20 11:15:50.315742 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-09-20 11:15:50.315753 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-09-20 11:15:50.315764 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-09-20 11:15:50.315775 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-09-20 11:15:50.315785 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-09-20 11:15:50.315796 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-09-20 11:15:50.315806 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-09-20 11:15:50.315817 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-09-20 11:15:50.315828 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB
2025-09-20 11:15:50.315859 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-09-20 11:15:50.315871 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-09-20 11:15:50.362128 | orchestrator | ++ semver latest 5.0.0
2025-09-20 11:15:50.425399 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-20 11:15:50.425482 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-20 11:15:50.425494 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-09-20 11:15:50.425502 | orchestrator | + osism apply facts
2025-09-20 11:16:02.474417 | orchestrator | 2025-09-20 11:16:02 | INFO  | Task b3e0f858-4fd2-448c-9e9c-5a5a61d54faf (facts) was prepared for execution.
2025-09-20 11:16:02.474545 | orchestrator | 2025-09-20 11:16:02 | INFO  | It takes a moment until task b3e0f858-4fd2-448c-9e9c-5a5a61d54faf (facts) has been started and output is visible here.
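The trace directly above (++ semver latest 5.0.0, + [[ -1 -eq -1 ]], + [[ latest != \l\a\t\e\s\t ]]) is a version gate: an extra step would only run for a pinned manager release older than 5.0.0, and the literal value latest is explicitly excluded, so with MANAGER_VERSION=latest the run falls straight through to osism apply facts. A rough reconstruction of that gate, assuming the semver helper prints -1/0/1 for older/equal/newer (which is what the following [[ -1 -eq -1 ]] test implies); the echo is a placeholder, since the gated commands themselves are not visible in this log:

    # Rough sketch of the version gate seen in the trace; the echo stands in
    # for whatever the gated step actually is, which this log does not show.
    MANAGER_VERSION="${MANAGER_VERSION:-latest}"
    result="$(semver "${MANAGER_VERSION}" 5.0.0)"
    if [[ "${result}" -eq -1 ]] && [[ "${MANAGER_VERSION}" != "latest" ]]; then
        echo "pinned manager release ${MANAGER_VERSION} predates 5.0.0"
    fi

In this run the comparison returns -1, but the latest guard keeps the current path, so the job continues with osism apply facts unchanged.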
2025-09-20 11:16:15.233728 | orchestrator | 2025-09-20 11:16:15.233834 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-20 11:16:15.233846 | orchestrator | 2025-09-20 11:16:15.233856 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 11:16:15.233865 | orchestrator | Saturday 20 September 2025 11:16:06 +0000 (0:00:00.279) 0:00:00.279 **** 2025-09-20 11:16:15.233873 | orchestrator | ok: [testbed-manager] 2025-09-20 11:16:15.233883 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:15.233892 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:15.233900 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:15.233908 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:16:15.233916 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:16:15.233925 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:16:15.233933 | orchestrator | 2025-09-20 11:16:15.233941 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 11:16:15.233949 | orchestrator | Saturday 20 September 2025 11:16:07 +0000 (0:00:01.145) 0:00:01.425 **** 2025-09-20 11:16:15.233957 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:16:15.233966 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:15.233975 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:16:15.233983 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:16:15.233991 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:16:15.233999 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:16:15.234007 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:16:15.234068 | orchestrator | 2025-09-20 11:16:15.234077 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 11:16:15.234086 | orchestrator | 2025-09-20 11:16:15.234093 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 11:16:15.234101 | orchestrator | Saturday 20 September 2025 11:16:09 +0000 (0:00:01.349) 0:00:02.774 **** 2025-09-20 11:16:15.234160 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:15.234169 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:15.234177 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:15.234185 | orchestrator | ok: [testbed-manager] 2025-09-20 11:16:15.234192 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:16:15.234200 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:16:15.234208 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:16:15.234216 | orchestrator | 2025-09-20 11:16:15.234224 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 11:16:15.234232 | orchestrator | 2025-09-20 11:16:15.234240 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 11:16:15.234248 | orchestrator | Saturday 20 September 2025 11:16:14 +0000 (0:00:05.275) 0:00:08.049 **** 2025-09-20 11:16:15.234256 | orchestrator | skipping: [testbed-manager] 2025-09-20 11:16:15.234264 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:15.234272 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:16:15.234280 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:16:15.234288 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:16:15.234295 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:16:15.234303 | orchestrator | skipping: 
[testbed-node-5] 2025-09-20 11:16:15.234333 | orchestrator | 2025-09-20 11:16:15.234341 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:16:15.234350 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234359 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234367 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234375 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234383 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234391 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234399 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:15.234407 | orchestrator | 2025-09-20 11:16:15.234415 | orchestrator | 2025-09-20 11:16:15.234423 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:16:15.234431 | orchestrator | Saturday 20 September 2025 11:16:14 +0000 (0:00:00.522) 0:00:08.571 **** 2025-09-20 11:16:15.234439 | orchestrator | =============================================================================== 2025-09-20 11:16:15.234447 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.28s 2025-09-20 11:16:15.234455 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s 2025-09-20 11:16:15.234463 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2025-09-20 11:16:15.234471 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-09-20 11:16:15.445055 | orchestrator | + osism validate ceph-mons 2025-09-20 11:16:46.394102 | orchestrator | 2025-09-20 11:16:46.394231 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-09-20 11:16:46.394244 | orchestrator | 2025-09-20 11:16:46.394254 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-20 11:16:46.394264 | orchestrator | Saturday 20 September 2025 11:16:31 +0000 (0:00:00.440) 0:00:00.440 **** 2025-09-20 11:16:46.394273 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:46.394282 | orchestrator | 2025-09-20 11:16:46.394291 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-20 11:16:46.394300 | orchestrator | Saturday 20 September 2025 11:16:32 +0000 (0:00:00.663) 0:00:01.104 **** 2025-09-20 11:16:46.394308 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:46.394317 | orchestrator | 2025-09-20 11:16:46.394326 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-20 11:16:46.394334 | orchestrator | Saturday 20 September 2025 11:16:33 +0000 (0:00:00.855) 0:00:01.959 **** 2025-09-20 11:16:46.394343 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.394353 | orchestrator | 2025-09-20 11:16:46.394362 | orchestrator | TASK [Prepare test data for container existance 
test] ************************** 2025-09-20 11:16:46.394371 | orchestrator | Saturday 20 September 2025 11:16:33 +0000 (0:00:00.247) 0:00:02.207 **** 2025-09-20 11:16:46.394380 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.394388 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:46.394397 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:46.394406 | orchestrator | 2025-09-20 11:16:46.394414 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-20 11:16:46.394423 | orchestrator | Saturday 20 September 2025 11:16:33 +0000 (0:00:00.323) 0:00:02.531 **** 2025-09-20 11:16:46.394454 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.394463 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:46.394488 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:46.394497 | orchestrator | 2025-09-20 11:16:46.394506 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-20 11:16:46.394514 | orchestrator | Saturday 20 September 2025 11:16:34 +0000 (0:00:00.993) 0:00:03.525 **** 2025-09-20 11:16:46.394523 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.394532 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:16:46.394540 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:16:46.394560 | orchestrator | 2025-09-20 11:16:46.394569 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-20 11:16:46.394588 | orchestrator | Saturday 20 September 2025 11:16:35 +0000 (0:00:00.307) 0:00:03.832 **** 2025-09-20 11:16:46.394597 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.394606 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:46.394617 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:46.394626 | orchestrator | 2025-09-20 11:16:46.394636 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:16:46.394646 | orchestrator | Saturday 20 September 2025 11:16:35 +0000 (0:00:00.410) 0:00:04.243 **** 2025-09-20 11:16:46.394655 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.394665 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:46.394675 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:46.394684 | orchestrator | 2025-09-20 11:16:46.394694 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-09-20 11:16:46.394704 | orchestrator | Saturday 20 September 2025 11:16:35 +0000 (0:00:00.290) 0:00:04.533 **** 2025-09-20 11:16:46.394714 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.394723 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:16:46.394733 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:16:46.394743 | orchestrator | 2025-09-20 11:16:46.394752 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-20 11:16:46.394763 | orchestrator | Saturday 20 September 2025 11:16:36 +0000 (0:00:00.269) 0:00:04.803 **** 2025-09-20 11:16:46.394772 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.394781 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:16:46.394791 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:16:46.394801 | orchestrator | 2025-09-20 11:16:46.394811 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 11:16:46.394821 | orchestrator | Saturday 20 September 2025 11:16:36 +0000 (0:00:00.293) 0:00:05.096 **** 
2025-09-20 11:16:46.394831 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.394840 | orchestrator | 2025-09-20 11:16:46.394850 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 11:16:46.394860 | orchestrator | Saturday 20 September 2025 11:16:36 +0000 (0:00:00.521) 0:00:05.618 **** 2025-09-20 11:16:46.394870 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.394879 | orchestrator | 2025-09-20 11:16:46.394893 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 11:16:46.394903 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.233) 0:00:05.851 **** 2025-09-20 11:16:46.394913 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.394923 | orchestrator | 2025-09-20 11:16:46.394933 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:16:46.394943 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.219) 0:00:06.070 **** 2025-09-20 11:16:46.394953 | orchestrator | 2025-09-20 11:16:46.394963 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:16:46.394971 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.064) 0:00:06.135 **** 2025-09-20 11:16:46.394980 | orchestrator | 2025-09-20 11:16:46.394989 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:16:46.394997 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.064) 0:00:06.199 **** 2025-09-20 11:16:46.395013 | orchestrator | 2025-09-20 11:16:46.395023 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 11:16:46.395031 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.091) 0:00:06.291 **** 2025-09-20 11:16:46.395040 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395049 | orchestrator | 2025-09-20 11:16:46.395057 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-20 11:16:46.395066 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.230) 0:00:06.521 **** 2025-09-20 11:16:46.395074 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395083 | orchestrator | 2025-09-20 11:16:46.395107 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-09-20 11:16:46.395116 | orchestrator | Saturday 20 September 2025 11:16:37 +0000 (0:00:00.226) 0:00:06.748 **** 2025-09-20 11:16:46.395125 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395134 | orchestrator | 2025-09-20 11:16:46.395188 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-09-20 11:16:46.395200 | orchestrator | Saturday 20 September 2025 11:16:38 +0000 (0:00:00.108) 0:00:06.857 **** 2025-09-20 11:16:46.395209 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:16:46.395217 | orchestrator | 2025-09-20 11:16:46.395226 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-20 11:16:46.395235 | orchestrator | Saturday 20 September 2025 11:16:39 +0000 (0:00:01.475) 0:00:08.332 **** 2025-09-20 11:16:46.395243 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395252 | orchestrator | 2025-09-20 11:16:46.395260 | orchestrator | TASK [Fail quorum test if not all monitors are in 
quorum] ********************** 2025-09-20 11:16:46.395269 | orchestrator | Saturday 20 September 2025 11:16:39 +0000 (0:00:00.285) 0:00:08.618 **** 2025-09-20 11:16:46.395277 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395286 | orchestrator | 2025-09-20 11:16:46.395295 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-20 11:16:46.395303 | orchestrator | Saturday 20 September 2025 11:16:40 +0000 (0:00:00.253) 0:00:08.872 **** 2025-09-20 11:16:46.395312 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395320 | orchestrator | 2025-09-20 11:16:46.395329 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-20 11:16:46.395337 | orchestrator | Saturday 20 September 2025 11:16:40 +0000 (0:00:00.293) 0:00:09.165 **** 2025-09-20 11:16:46.395346 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395354 | orchestrator | 2025-09-20 11:16:46.395363 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-20 11:16:46.395372 | orchestrator | Saturday 20 September 2025 11:16:40 +0000 (0:00:00.271) 0:00:09.437 **** 2025-09-20 11:16:46.395380 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395389 | orchestrator | 2025-09-20 11:16:46.395397 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-20 11:16:46.395406 | orchestrator | Saturday 20 September 2025 11:16:40 +0000 (0:00:00.111) 0:00:09.548 **** 2025-09-20 11:16:46.395414 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395423 | orchestrator | 2025-09-20 11:16:46.395431 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-20 11:16:46.395440 | orchestrator | Saturday 20 September 2025 11:16:40 +0000 (0:00:00.123) 0:00:09.671 **** 2025-09-20 11:16:46.395449 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395457 | orchestrator | 2025-09-20 11:16:46.395466 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-20 11:16:46.395474 | orchestrator | Saturday 20 September 2025 11:16:41 +0000 (0:00:00.114) 0:00:09.786 **** 2025-09-20 11:16:46.395483 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:16:46.395491 | orchestrator | 2025-09-20 11:16:46.395500 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-20 11:16:46.395508 | orchestrator | Saturday 20 September 2025 11:16:42 +0000 (0:00:01.235) 0:00:11.021 **** 2025-09-20 11:16:46.395517 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395532 | orchestrator | 2025-09-20 11:16:46.395541 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-20 11:16:46.395549 | orchestrator | Saturday 20 September 2025 11:16:42 +0000 (0:00:00.272) 0:00:11.293 **** 2025-09-20 11:16:46.395558 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395566 | orchestrator | 2025-09-20 11:16:46.395575 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-20 11:16:46.395583 | orchestrator | Saturday 20 September 2025 11:16:42 +0000 (0:00:00.138) 0:00:11.432 **** 2025-09-20 11:16:46.395592 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:16:46.395600 | orchestrator | 2025-09-20 11:16:46.395609 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] 
**************** 2025-09-20 11:16:46.395617 | orchestrator | Saturday 20 September 2025 11:16:42 +0000 (0:00:00.152) 0:00:11.585 **** 2025-09-20 11:16:46.395626 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395634 | orchestrator | 2025-09-20 11:16:46.395643 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-20 11:16:46.395651 | orchestrator | Saturday 20 September 2025 11:16:42 +0000 (0:00:00.131) 0:00:11.716 **** 2025-09-20 11:16:46.395660 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395668 | orchestrator | 2025-09-20 11:16:46.395677 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-20 11:16:46.395685 | orchestrator | Saturday 20 September 2025 11:16:43 +0000 (0:00:00.358) 0:00:12.075 **** 2025-09-20 11:16:46.395694 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:46.395703 | orchestrator | 2025-09-20 11:16:46.395711 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-20 11:16:46.395720 | orchestrator | Saturday 20 September 2025 11:16:43 +0000 (0:00:00.253) 0:00:12.328 **** 2025-09-20 11:16:46.395728 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:16:46.395737 | orchestrator | 2025-09-20 11:16:46.395745 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 11:16:46.395754 | orchestrator | Saturday 20 September 2025 11:16:43 +0000 (0:00:00.258) 0:00:12.587 **** 2025-09-20 11:16:46.395762 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:46.395771 | orchestrator | 2025-09-20 11:16:46.395780 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 11:16:46.395788 | orchestrator | Saturday 20 September 2025 11:16:45 +0000 (0:00:01.757) 0:00:14.345 **** 2025-09-20 11:16:46.395797 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:46.395805 | orchestrator | 2025-09-20 11:16:46.395814 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 11:16:46.395822 | orchestrator | Saturday 20 September 2025 11:16:45 +0000 (0:00:00.284) 0:00:14.629 **** 2025-09-20 11:16:46.395831 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:46.395840 | orchestrator | 2025-09-20 11:16:46.395854 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:16:48.820412 | orchestrator | Saturday 20 September 2025 11:16:46 +0000 (0:00:00.245) 0:00:14.875 **** 2025-09-20 11:16:48.820525 | orchestrator | 2025-09-20 11:16:48.820542 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:16:48.820554 | orchestrator | Saturday 20 September 2025 11:16:46 +0000 (0:00:00.069) 0:00:14.945 **** 2025-09-20 11:16:48.820564 | orchestrator | 2025-09-20 11:16:48.820576 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:16:48.820586 | orchestrator | Saturday 20 September 2025 11:16:46 +0000 (0:00:00.114) 0:00:15.059 **** 2025-09-20 11:16:48.820597 | orchestrator | 2025-09-20 11:16:48.820608 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-20 11:16:48.820619 | orchestrator | Saturday 20 September 
2025 11:16:46 +0000 (0:00:00.078) 0:00:15.138 **** 2025-09-20 11:16:48.820630 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:16:48.820641 | orchestrator | 2025-09-20 11:16:48.820677 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 11:16:48.820688 | orchestrator | Saturday 20 September 2025 11:16:48 +0000 (0:00:01.683) 0:00:16.821 **** 2025-09-20 11:16:48.820698 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-20 11:16:48.820709 | orchestrator |  "msg": [ 2025-09-20 11:16:48.820721 | orchestrator |  "Validator run completed.", 2025-09-20 11:16:48.820733 | orchestrator |  "You can find the report file here:", 2025-09-20 11:16:48.820744 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-20T11:16:32+00:00-report.json", 2025-09-20 11:16:48.820756 | orchestrator |  "on the following host:", 2025-09-20 11:16:48.820767 | orchestrator |  "testbed-manager" 2025-09-20 11:16:48.820778 | orchestrator |  ] 2025-09-20 11:16:48.820789 | orchestrator | } 2025-09-20 11:16:48.820800 | orchestrator | 2025-09-20 11:16:48.820811 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:16:48.820823 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-20 11:16:48.820835 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:48.820846 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:16:48.820857 | orchestrator | 2025-09-20 11:16:48.820868 | orchestrator | 2025-09-20 11:16:48.820878 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:16:48.820889 | orchestrator | Saturday 20 September 2025 11:16:48 +0000 (0:00:00.544) 0:00:17.365 **** 2025-09-20 11:16:48.820900 | orchestrator | =============================================================================== 2025-09-20 11:16:48.820911 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s 2025-09-20 11:16:48.820922 | orchestrator | Write report file ------------------------------------------------------- 1.68s 2025-09-20 11:16:48.820932 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.48s 2025-09-20 11:16:48.820944 | orchestrator | Gather status data ------------------------------------------------------ 1.24s 2025-09-20 11:16:48.820956 | orchestrator | Get container info ------------------------------------------------------ 0.99s 2025-09-20 11:16:48.820969 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2025-09-20 11:16:48.820981 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-09-20 11:16:48.820993 | orchestrator | Print report file information ------------------------------------------- 0.54s 2025-09-20 11:16:48.821024 | orchestrator | Aggregate test results step one ----------------------------------------- 0.52s 2025-09-20 11:16:48.821037 | orchestrator | Set test result to passed if container is existing ---------------------- 0.41s 2025-09-20 11:16:48.821049 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.36s 2025-09-20 11:16:48.821061 | orchestrator | Prepare test data 
for container existance test -------------------------- 0.32s 2025-09-20 11:16:48.821078 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-09-20 11:16:48.821091 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-09-20 11:16:48.821104 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.29s 2025-09-20 11:16:48.821116 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-09-20 11:16:48.821128 | orchestrator | Set quorum test data ---------------------------------------------------- 0.29s 2025-09-20 11:16:48.821141 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-09-20 11:16:48.821177 | orchestrator | Set health test data ---------------------------------------------------- 0.27s 2025-09-20 11:16:48.821190 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.27s 2025-09-20 11:16:49.099591 | orchestrator | + osism validate ceph-mgrs 2025-09-20 11:17:19.485810 | orchestrator | 2025-09-20 11:17:19.485922 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-20 11:17:19.485939 | orchestrator | 2025-09-20 11:17:19.485952 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-20 11:17:19.485964 | orchestrator | Saturday 20 September 2025 11:17:05 +0000 (0:00:00.463) 0:00:00.463 **** 2025-09-20 11:17:19.485976 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.485987 | orchestrator | 2025-09-20 11:17:19.485998 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-20 11:17:19.486010 | orchestrator | Saturday 20 September 2025 11:17:05 +0000 (0:00:00.699) 0:00:01.163 **** 2025-09-20 11:17:19.486072 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.486084 | orchestrator | 2025-09-20 11:17:19.486095 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-20 11:17:19.486107 | orchestrator | Saturday 20 September 2025 11:17:06 +0000 (0:00:00.846) 0:00:02.010 **** 2025-09-20 11:17:19.486119 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.486132 | orchestrator | 2025-09-20 11:17:19.486143 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-20 11:17:19.486154 | orchestrator | Saturday 20 September 2025 11:17:07 +0000 (0:00:00.276) 0:00:02.287 **** 2025-09-20 11:17:19.486166 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.486177 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:17:19.486217 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:17:19.486229 | orchestrator | 2025-09-20 11:17:19.486241 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-20 11:17:19.486251 | orchestrator | Saturday 20 September 2025 11:17:07 +0000 (0:00:00.312) 0:00:02.599 **** 2025-09-20 11:17:19.486262 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.486273 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:17:19.486285 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:17:19.486296 | orchestrator | 2025-09-20 11:17:19.486308 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-20 
11:17:19.486319 | orchestrator | Saturday 20 September 2025 11:17:08 +0000 (0:00:00.978) 0:00:03.578 **** 2025-09-20 11:17:19.486338 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.486358 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:17:19.486379 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:17:19.486398 | orchestrator | 2025-09-20 11:17:19.486417 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-20 11:17:19.486436 | orchestrator | Saturday 20 September 2025 11:17:08 +0000 (0:00:00.310) 0:00:03.888 **** 2025-09-20 11:17:19.486456 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.486474 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:17:19.486494 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:17:19.486513 | orchestrator | 2025-09-20 11:17:19.486532 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:17:19.486552 | orchestrator | Saturday 20 September 2025 11:17:09 +0000 (0:00:00.526) 0:00:04.414 **** 2025-09-20 11:17:19.486571 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.486589 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:17:19.486601 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:17:19.486612 | orchestrator | 2025-09-20 11:17:19.486623 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-09-20 11:17:19.486635 | orchestrator | Saturday 20 September 2025 11:17:09 +0000 (0:00:00.313) 0:00:04.727 **** 2025-09-20 11:17:19.486646 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.486657 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:17:19.486667 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:17:19.486678 | orchestrator | 2025-09-20 11:17:19.486689 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-20 11:17:19.486700 | orchestrator | Saturday 20 September 2025 11:17:09 +0000 (0:00:00.285) 0:00:05.013 **** 2025-09-20 11:17:19.486735 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.486747 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:17:19.486757 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:17:19.486768 | orchestrator | 2025-09-20 11:17:19.486779 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 11:17:19.486790 | orchestrator | Saturday 20 September 2025 11:17:10 +0000 (0:00:00.314) 0:00:05.328 **** 2025-09-20 11:17:19.486800 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.486811 | orchestrator | 2025-09-20 11:17:19.486824 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 11:17:19.486843 | orchestrator | Saturday 20 September 2025 11:17:10 +0000 (0:00:00.756) 0:00:06.084 **** 2025-09-20 11:17:19.486862 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.486879 | orchestrator | 2025-09-20 11:17:19.486896 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 11:17:19.486914 | orchestrator | Saturday 20 September 2025 11:17:11 +0000 (0:00:00.284) 0:00:06.368 **** 2025-09-20 11:17:19.486932 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.486952 | orchestrator | 2025-09-20 11:17:19.486970 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:19.486985 | orchestrator | Saturday 
20 September 2025 11:17:11 +0000 (0:00:00.254) 0:00:06.623 **** 2025-09-20 11:17:19.486996 | orchestrator | 2025-09-20 11:17:19.487007 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:19.487033 | orchestrator | Saturday 20 September 2025 11:17:11 +0000 (0:00:00.099) 0:00:06.722 **** 2025-09-20 11:17:19.487044 | orchestrator | 2025-09-20 11:17:19.487055 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:19.487066 | orchestrator | Saturday 20 September 2025 11:17:11 +0000 (0:00:00.076) 0:00:06.798 **** 2025-09-20 11:17:19.487077 | orchestrator | 2025-09-20 11:17:19.487088 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 11:17:19.487099 | orchestrator | Saturday 20 September 2025 11:17:11 +0000 (0:00:00.076) 0:00:06.875 **** 2025-09-20 11:17:19.487109 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.487120 | orchestrator | 2025-09-20 11:17:19.487131 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-20 11:17:19.487141 | orchestrator | Saturday 20 September 2025 11:17:11 +0000 (0:00:00.243) 0:00:07.118 **** 2025-09-20 11:17:19.487152 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.487163 | orchestrator | 2025-09-20 11:17:19.487239 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-09-20 11:17:19.487257 | orchestrator | Saturday 20 September 2025 11:17:12 +0000 (0:00:00.259) 0:00:07.378 **** 2025-09-20 11:17:19.487268 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.487279 | orchestrator | 2025-09-20 11:17:19.487289 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-09-20 11:17:19.487300 | orchestrator | Saturday 20 September 2025 11:17:12 +0000 (0:00:00.115) 0:00:07.494 **** 2025-09-20 11:17:19.487310 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:17:19.487321 | orchestrator | 2025-09-20 11:17:19.487332 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-09-20 11:17:19.487343 | orchestrator | Saturday 20 September 2025 11:17:14 +0000 (0:00:01.911) 0:00:09.406 **** 2025-09-20 11:17:19.487353 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.487364 | orchestrator | 2025-09-20 11:17:19.487374 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-09-20 11:17:19.487385 | orchestrator | Saturday 20 September 2025 11:17:14 +0000 (0:00:00.261) 0:00:09.667 **** 2025-09-20 11:17:19.487395 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.487406 | orchestrator | 2025-09-20 11:17:19.487417 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-09-20 11:17:19.487428 | orchestrator | Saturday 20 September 2025 11:17:15 +0000 (0:00:00.594) 0:00:10.262 **** 2025-09-20 11:17:19.487438 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.487459 | orchestrator | 2025-09-20 11:17:19.487470 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-09-20 11:17:19.487481 | orchestrator | Saturday 20 September 2025 11:17:15 +0000 (0:00:00.121) 0:00:10.384 **** 2025-09-20 11:17:19.487491 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:17:19.487502 | orchestrator | 2025-09-20 11:17:19.487513 | orchestrator | 
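For reference, the mgr-module tasks above ("Gather list of mgr modules", "Parse mgr module list from json", "Extract list of enabled mgr modules", "Pass test if required mgr modules are enabled") boil down to querying the manager module list as JSON and comparing it against a required set. A minimal Python sketch of that check follows; the REQUIRED_MODULES set and the direct ceph invocation are assumptions for illustration (the validator runs the command inside a Ceph container on the target node, and its required-module list is not shown in this log) -- only ceph mgr module ls -f json itself is a standard Ceph CLI call.

#!/usr/bin/env python3
# Hypothetical sketch: verify that required Ceph mgr modules are enabled,
# mirroring the mgr-module tasks in the ceph-mgrs validator run above.
import json
import subprocess

# Assumption for illustration only -- the real required set comes from the
# osism validator configuration and is not visible in this log.
REQUIRED_MODULES = {"balancer", "status"}

def enabled_mgr_modules() -> set[str]:
    # "ceph mgr module ls -f json" is standard Ceph CLI; the validator
    # delegates it to a node running the mgr containers instead of the
    # local host.
    out = subprocess.run(
        ["ceph", "mgr", "module", "ls", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    # Recent Ceph releases report both always-on and explicitly enabled
    # modules; treat either as "enabled" for this check.
    return set(data.get("always_on_modules", [])) | set(data.get("enabled_modules", []))

if __name__ == "__main__":
    missing = REQUIRED_MODULES - enabled_mgr_modules()
    if missing:
        raise SystemExit(f"mgr modules disabled that should be enabled: {sorted(missing)}")
    print("required mgr modules are enabled")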
TASK [Set validation result to passed if no test failed] *********************** 2025-09-20 11:17:19.487524 | orchestrator | Saturday 20 September 2025 11:17:15 +0000 (0:00:00.160) 0:00:10.545 **** 2025-09-20 11:17:19.487538 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.487557 | orchestrator | 2025-09-20 11:17:19.487574 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-20 11:17:19.487592 | orchestrator | Saturday 20 September 2025 11:17:15 +0000 (0:00:00.250) 0:00:10.795 **** 2025-09-20 11:17:19.487609 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:17:19.487626 | orchestrator | 2025-09-20 11:17:19.487643 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 11:17:19.487659 | orchestrator | Saturday 20 September 2025 11:17:15 +0000 (0:00:00.237) 0:00:11.033 **** 2025-09-20 11:17:19.487676 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.487692 | orchestrator | 2025-09-20 11:17:19.487708 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 11:17:19.487724 | orchestrator | Saturday 20 September 2025 11:17:16 +0000 (0:00:01.128) 0:00:12.161 **** 2025-09-20 11:17:19.487741 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.487758 | orchestrator | 2025-09-20 11:17:19.487776 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 11:17:19.487793 | orchestrator | Saturday 20 September 2025 11:17:17 +0000 (0:00:00.231) 0:00:12.393 **** 2025-09-20 11:17:19.487812 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.487828 | orchestrator | 2025-09-20 11:17:19.487846 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:19.487863 | orchestrator | Saturday 20 September 2025 11:17:17 +0000 (0:00:00.239) 0:00:12.632 **** 2025-09-20 11:17:19.487879 | orchestrator | 2025-09-20 11:17:19.487894 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:19.487911 | orchestrator | Saturday 20 September 2025 11:17:17 +0000 (0:00:00.065) 0:00:12.698 **** 2025-09-20 11:17:19.487931 | orchestrator | 2025-09-20 11:17:19.487949 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:19.487968 | orchestrator | Saturday 20 September 2025 11:17:17 +0000 (0:00:00.070) 0:00:12.768 **** 2025-09-20 11:17:19.487988 | orchestrator | 2025-09-20 11:17:19.488004 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-20 11:17:19.488023 | orchestrator | Saturday 20 September 2025 11:17:17 +0000 (0:00:00.078) 0:00:12.847 **** 2025-09-20 11:17:19.488040 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:19.488059 | orchestrator | 2025-09-20 11:17:19.488078 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 11:17:19.488096 | orchestrator | Saturday 20 September 2025 11:17:19 +0000 (0:00:01.471) 0:00:14.319 **** 2025-09-20 11:17:19.488112 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-20 11:17:19.488123 | orchestrator |  "msg": [ 2025-09-20 11:17:19.488134 | orchestrator |  
"Validator run completed.", 2025-09-20 11:17:19.488145 | orchestrator |  "You can find the report file here:", 2025-09-20 11:17:19.488157 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-09-20T11:17:05+00:00-report.json", 2025-09-20 11:17:19.488169 | orchestrator |  "on the following host:", 2025-09-20 11:17:19.488180 | orchestrator |  "testbed-manager" 2025-09-20 11:17:19.488213 | orchestrator |  ] 2025-09-20 11:17:19.488224 | orchestrator | } 2025-09-20 11:17:19.488235 | orchestrator | 2025-09-20 11:17:19.488257 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:17:19.488270 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-20 11:17:19.488282 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:17:19.488306 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:17:19.691865 | orchestrator | 2025-09-20 11:17:19.691955 | orchestrator | 2025-09-20 11:17:19.691966 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:17:19.691977 | orchestrator | Saturday 20 September 2025 11:17:19 +0000 (0:00:00.370) 0:00:14.689 **** 2025-09-20 11:17:19.691985 | orchestrator | =============================================================================== 2025-09-20 11:17:19.691994 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.91s 2025-09-20 11:17:19.692002 | orchestrator | Write report file ------------------------------------------------------- 1.47s 2025-09-20 11:17:19.692010 | orchestrator | Aggregate test results step one ----------------------------------------- 1.13s 2025-09-20 11:17:19.692020 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-09-20 11:17:19.692029 | orchestrator | Create report output directory ------------------------------------------ 0.85s 2025-09-20 11:17:19.692039 | orchestrator | Aggregate test results step one ----------------------------------------- 0.76s 2025-09-20 11:17:19.692048 | orchestrator | Get timestamp for report file ------------------------------------------- 0.70s 2025-09-20 11:17:19.692058 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.59s 2025-09-20 11:17:19.692068 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s 2025-09-20 11:17:19.692077 | orchestrator | Print report file information ------------------------------------------- 0.37s 2025-09-20 11:17:19.692087 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2025-09-20 11:17:19.692097 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-09-20 11:17:19.692106 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-09-20 11:17:19.692116 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-09-20 11:17:19.692125 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-09-20 11:17:19.692135 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-09-20 11:17:19.692145 | orchestrator | Define report vars 
------------------------------------------------------ 0.28s 2025-09-20 11:17:19.692154 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s 2025-09-20 11:17:19.692184 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s 2025-09-20 11:17:19.692285 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2025-09-20 11:17:19.918101 | orchestrator | + osism validate ceph-osds 2025-09-20 11:17:40.379155 | orchestrator | 2025-09-20 11:17:40.379319 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-09-20 11:17:40.379339 | orchestrator | 2025-09-20 11:17:40.379351 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-20 11:17:40.379363 | orchestrator | Saturday 20 September 2025 11:17:36 +0000 (0:00:00.436) 0:00:00.436 **** 2025-09-20 11:17:40.379374 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:40.379385 | orchestrator | 2025-09-20 11:17:40.379397 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 11:17:40.379408 | orchestrator | Saturday 20 September 2025 11:17:36 +0000 (0:00:00.664) 0:00:01.101 **** 2025-09-20 11:17:40.379419 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:40.379454 | orchestrator | 2025-09-20 11:17:40.379466 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-20 11:17:40.379477 | orchestrator | Saturday 20 September 2025 11:17:37 +0000 (0:00:00.241) 0:00:01.342 **** 2025-09-20 11:17:40.379488 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:40.379499 | orchestrator | 2025-09-20 11:17:40.379509 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-20 11:17:40.379520 | orchestrator | Saturday 20 September 2025 11:17:38 +0000 (0:00:00.984) 0:00:02.327 **** 2025-09-20 11:17:40.379533 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:40.379552 | orchestrator | 2025-09-20 11:17:40.379571 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-20 11:17:40.379582 | orchestrator | Saturday 20 September 2025 11:17:38 +0000 (0:00:00.151) 0:00:02.478 **** 2025-09-20 11:17:40.379593 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:40.379604 | orchestrator | 2025-09-20 11:17:40.379615 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-20 11:17:40.379626 | orchestrator | Saturday 20 September 2025 11:17:38 +0000 (0:00:00.137) 0:00:02.616 **** 2025-09-20 11:17:40.379636 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:40.379647 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:40.379658 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:40.379669 | orchestrator | 2025-09-20 11:17:40.379682 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-20 11:17:40.379709 | orchestrator | Saturday 20 September 2025 11:17:38 +0000 (0:00:00.291) 0:00:02.907 **** 2025-09-20 11:17:40.379721 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:40.379733 | orchestrator | 2025-09-20 11:17:40.379746 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 
2025-09-20 11:17:40.379758 | orchestrator | Saturday 20 September 2025 11:17:38 +0000 (0:00:00.151) 0:00:03.058 **** 2025-09-20 11:17:40.379770 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:40.379783 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:40.379795 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:40.379806 | orchestrator | 2025-09-20 11:17:40.379817 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-09-20 11:17:40.379827 | orchestrator | Saturday 20 September 2025 11:17:39 +0000 (0:00:00.317) 0:00:03.376 **** 2025-09-20 11:17:40.379838 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:40.379851 | orchestrator | 2025-09-20 11:17:40.379869 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:17:40.379880 | orchestrator | Saturday 20 September 2025 11:17:39 +0000 (0:00:00.529) 0:00:03.906 **** 2025-09-20 11:17:40.379891 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:40.379902 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:40.379913 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:40.379923 | orchestrator | 2025-09-20 11:17:40.379934 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-09-20 11:17:40.379945 | orchestrator | Saturday 20 September 2025 11:17:40 +0000 (0:00:00.471) 0:00:04.377 **** 2025-09-20 11:17:40.379959 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6e387ef717c3e44e96c2adeeb1514cef89e8a17eeae1511059c8053a9c1cb3ea', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-09-20 11:17:40.379973 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47169ebbd79020505d67fe78409f6634c3bb353c6a92fac2175c22647dc61bf4', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-09-20 11:17:40.379986 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76183e870f7c6b70b0c8a28cc757f1d782f91757a6289750fa23bcc9e3b4e12a', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 11:17:40.380000 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d0b63a562a00796fea70f0dad6737631462d529e7563e4778bd3249611e27c7', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 11:17:40.380026 | orchestrator | skipping: [testbed-node-3] => (item={'id': '05ab458f8bbe72b23a70f954009a8315bc12ecfa55c2311157b046cd60bb9cbf', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-09-20 11:17:40.380056 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7db8fe7bd6437410bb94f7bb1258fb84866de7463b701d501053ae46368f6c3e', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-09-20 11:17:40.380068 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8b48a70ddeae75ad319637ea79c7cb039579de70f04b3f558d2589a9909a4d77', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  
2025-09-20 11:17:40.380080 | orchestrator | skipping: [testbed-node-3] => (item={'id': '20d29361c06991b584106f29be7bd325d7bc838d584e95c629945ce3ccdeeffc', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 11:17:40.380091 | orchestrator | skipping: [testbed-node-3] => (item={'id': '32d7a008bdbcbf17b3fa7863d51bc08b3f1c966729a41cfde68fcdf6d7b78c8e', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 11:17:40.380110 | orchestrator | skipping: [testbed-node-3] => (item={'id': '301d90fea2d7652b498dfd4206897c631a8cb25adfe32be147e3f68dbfe463e9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-09-20 11:17:40.380126 | orchestrator | skipping: [testbed-node-3] => (item={'id': '249b06fcacc34aa23ce2f97518f97437c32b6c6670e1d6a23e14fc2dde228b16', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2025-09-20 11:17:40.380143 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f4758a12787b3e4febf280c2abb9f25ce76bfa8e8e406a289a47c3f25cd1e798', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-20 11:17:40.380155 | orchestrator | ok: [testbed-node-3] => (item={'id': '97b8a867aefab4e3c9f56e5e664557e35a3d6cbb165ea403e776b4ab92181793', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-09-20 11:17:40.380167 | orchestrator | ok: [testbed-node-3] => (item={'id': '40c60d35b5a78797a70468b22861479f8549acf563704920457180c8c4ec0179', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-09-20 11:17:40.380178 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c7d1b5880d5b38a27ca5f2e3ed898f35ae0251f4a63e465f765fc1592d89d74', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-09-20 11:17:40.380189 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c19ad4407860605f6351ac31c756eb7e2b944d818b7eecc1d6da8413d3ffb119', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-09-20 11:17:40.380201 | orchestrator | skipping: [testbed-node-3] => (item={'id': '517be2932a6007910c1f152ec9e5f131dd493c0b2d79527c67bb57a876416e44', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-09-20 11:17:40.380237 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85ca93e71bb9a43a4ff3a70d36b0a31df7118ca550db07a7166d3bc8daf705ea', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 11:17:40.380249 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7bbf94ccac325fa03bbfeb3418f98847a2f0233978828e7bc3034b7d585b06d4', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 
11:17:40.380261 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'df48170f8a4eb8510956eb51ef869cf949d5ecae7455c23de095c2909c217547', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-20 11:17:40.380272 | orchestrator | skipping: [testbed-node-4] => (item={'id': '97a95754685512ff176a5bdd2183bbcd0a70968e47a54bb3601f20be4059e8bc', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-09-20 11:17:40.380291 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7033d8d41383cb91f50b015396254fc54b12c9c19fa8f5eedc4733b8512011be', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-09-20 11:17:40.618808 | orchestrator | skipping: [testbed-node-4] => (item={'id': '34f2078730ec3d1e913d8d1cf01d81b0b0e65ce67143a76a6cfeaa224aebc4e1', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 11:17:40.618888 | orchestrator | skipping: [testbed-node-4] => (item={'id': '665e6d6891846293046eab2cb7d1774f29784ad064d0236a91f13ae3e4e1725e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 11:17:40.618899 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ded79da72b65eed9eecac0949988d584aacc3a344fdc91cdd45121535b1aecd9', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-09-20 11:17:40.618908 | orchestrator | skipping: [testbed-node-4] => (item={'id': '34c2ac240a37288f171f5d17bb1ddf3b2a84c2aa0f861b3a13ff88f70306812d', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-09-20 11:17:40.618915 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3a096bc731a678d794d0434b2f6f5fac1fb1ae15dce63efe678140ce2767ede', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-09-20 11:17:40.618922 | orchestrator | skipping: [testbed-node-4] => (item={'id': '134decdb6c42bfd27078b89028b6aadddad8dcedbedacee255f6db5159f7127d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 11:17:40.618929 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e063eb46819669a499645400ac6b86ebd4c25f2f1c2e5dd66fed6f39c90dd996', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 11:17:40.618937 | orchestrator | skipping: [testbed-node-4] => (item={'id': '022438a6536392391c7466453dd5fd8578e442966b629853dee005932cc0a45e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-09-20 11:17:40.618944 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a5cdf8c4cb0e5c5aefeff6be3eef0a4aaf58abb821ba704fda79bff0d90541c6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 
'state': 'running', 'status': 'Up 21 minutes'})  2025-09-20 11:17:40.618967 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79ea1f162ada32f341c9528473d958996b4a87ff66032d99f6f86e5c64747556', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-20 11:17:40.618974 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e08f4cdd31074c829184ca99181ebad95cf069ca432f00e19f085bf48fc13b74', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-09-20 11:17:40.618982 | orchestrator | ok: [testbed-node-4] => (item={'id': '5339c30fe0457e11421d81f3ee636dfa97963e07e22d78f353d28fb1ff2bdb03', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-09-20 11:17:40.618988 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e4ac7c4dbe08b89a7faa0d0963f7a3c3d91d242869636bbf9da052356985fc5', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-09-20 11:17:40.618996 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e91e299f36a51d9d07af5c5eddcb532a55624e22ddd1b133a8f7531b2511bd20', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-09-20 11:17:40.619016 | orchestrator | skipping: [testbed-node-4] => (item={'id': '87172a134f40cffcd52b5bd97a1c26187539f197ea063167c5ccc3263444dee5', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-09-20 11:17:40.619035 | orchestrator | skipping: [testbed-node-4] => (item={'id': '84d14b71fc0cd595b3a5edba095b5ec322c17b16738a7de96caf7221b3ac628f', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 11:17:40.619043 | orchestrator | skipping: [testbed-node-4] => (item={'id': '20ff489145e0a01f094e4826a1856b5830c4686db0b4b0df632d6eb96ffccc3f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 11:17:40.619050 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ce3a1bfca3163e3ec0c673aaa649a9f581181d327e5e217098e830ce1a12510f', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-20 11:17:40.619056 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aae276368f118d6977b07241df90cb1dfdc3c7aef18459cefcabc70803680c46', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-09-20 11:17:40.619064 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25dcf50b1b22bbb013842043d196c434d1a7883f498584e30f4f6a3791b51813', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-09-20 11:17:40.619071 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e8a78c49202fcc6e8235061411566b5efc30ef0889bc62f8503f203490d23145', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 11:17:40.619081 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': 'f44e2bbdb5f6b5b5a992475bc62acfbad08ca90f30fce28b05d5a53ce8ff3800', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 11:17:40.619089 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f41631f155ea6b1f413eceb0094f5bd16ce46bb87392ae10b5216a5acbd1e7e', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-09-20 11:17:40.619095 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd595dc291f5d9d86f28f21297745fb112f44626e1097e0fa921592d67a648e5', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-09-20 11:17:40.619107 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5aa7563c37e937ae98753f96d4936c9200c5647c5137db7380b1ea0200f4a326', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-09-20 11:17:40.619114 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11ea907676ea0e2e071caf21ecb523abfcf5b9a31ba931a746d833477823004e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 11:17:40.619121 | orchestrator | skipping: [testbed-node-5] => (item={'id': '24c75229435c34eca871405ca4896900fbb0525e0051138bb8a85108f9d8f74c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 11:17:40.619127 | orchestrator | skipping: [testbed-node-5] => (item={'id': '54979dfad4b2fd6703fba038dd403a32008f0bc464b08f3ce8daf840e8698146', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-09-20 11:17:40.619134 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c09b45e3160fe0c31ed0b46b1200908509d750e2a1d254aee30f6365ecbaa33', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-09-20 11:17:40.619141 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4487e78dee025f262e0a3caecdbd20782b98a6cf5a8059983af3551a771d7bbd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-20 11:17:40.619152 | orchestrator | ok: [testbed-node-5] => (item={'id': '709b403bbab5ca7bc6de6aa2486db20b3905be50ab391389b60ff64a8637b76b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-09-20 11:17:48.467867 | orchestrator | ok: [testbed-node-5] => (item={'id': '8e7b5556cd641c7f6b0dda017c63387aa6786810a9519b603b74db057a628bdc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-09-20 11:17:48.467982 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1c76e3fb0fd5ea0ab1c72f41311be96dd2ceaddf6dadb26490dbd5e96b89c35f', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 
minutes'})  2025-09-20 11:17:48.467999 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c31d2c8965dd3083d178bc0c26ef0852b9b9b1ca9eaee3e1300530c29a8978f7', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-09-20 11:17:48.468013 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2a178a88481f299aa3518db0b61b43cc5fccc91975a7282d5847f318d69b5bb', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-09-20 11:17:48.468026 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b652eb77c6205588060fecc1fc34521f8a448fa3d7487b3efc4891f306cb001e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 11:17:48.468056 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a02993b4dfbffd3eeb921e980083cf8cd7ebf3477f33df09277b688ccbacd19f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 11:17:48.468068 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1094ab9b778d4e555d9cb00d1329be9663306d23f4b6225f9aab6b130c3610f7', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-20 11:17:48.468103 | orchestrator | 2025-09-20 11:17:48.468117 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-09-20 11:17:48.468130 | orchestrator | Saturday 20 September 2025 11:17:40 +0000 (0:00:00.501) 0:00:04.879 **** 2025-09-20 11:17:48.468141 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.468154 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.468165 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.468176 | orchestrator | 2025-09-20 11:17:48.468187 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-09-20 11:17:48.468199 | orchestrator | Saturday 20 September 2025 11:17:40 +0000 (0:00:00.312) 0:00:05.191 **** 2025-09-20 11:17:48.468210 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.468222 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:48.468288 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:48.468301 | orchestrator | 2025-09-20 11:17:48.468312 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-09-20 11:17:48.468323 | orchestrator | Saturday 20 September 2025 11:17:41 +0000 (0:00:00.311) 0:00:05.503 **** 2025-09-20 11:17:48.468334 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.468344 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.468355 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.468366 | orchestrator | 2025-09-20 11:17:48.468377 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:17:48.468389 | orchestrator | Saturday 20 September 2025 11:17:41 +0000 (0:00:00.524) 0:00:06.028 **** 2025-09-20 11:17:48.468401 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.468413 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.468426 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.468438 | orchestrator | 2025-09-20 11:17:48.468451 | orchestrator | TASK [Get list of ceph-osd containers that are not 
running] ******************** 2025-09-20 11:17:48.468464 | orchestrator | Saturday 20 September 2025 11:17:42 +0000 (0:00:00.311) 0:00:06.339 **** 2025-09-20 11:17:48.468476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-09-20 11:17:48.468490 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-09-20 11:17:48.468502 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.468514 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-09-20 11:17:48.468526 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-09-20 11:17:48.468539 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:48.468552 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-09-20 11:17:48.468564 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-09-20 11:17:48.468576 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:48.468588 | orchestrator | 2025-09-20 11:17:48.468601 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-09-20 11:17:48.468613 | orchestrator | Saturday 20 September 2025 11:17:42 +0000 (0:00:00.335) 0:00:06.674 **** 2025-09-20 11:17:48.468626 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.468638 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.468651 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.468662 | orchestrator | 2025-09-20 11:17:48.468692 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-20 11:17:48.468706 | orchestrator | Saturday 20 September 2025 11:17:42 +0000 (0:00:00.333) 0:00:07.008 **** 2025-09-20 11:17:48.468718 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.468731 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:48.468743 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:48.468762 | orchestrator | 2025-09-20 11:17:48.468773 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-20 11:17:48.468784 | orchestrator | Saturday 20 September 2025 11:17:43 +0000 (0:00:00.544) 0:00:07.552 **** 2025-09-20 11:17:48.468795 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.468805 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:48.468816 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:48.468827 | orchestrator | 2025-09-20 11:17:48.468838 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-09-20 11:17:48.468849 | orchestrator | Saturday 20 September 2025 11:17:43 +0000 (0:00:00.328) 0:00:07.881 **** 2025-09-20 11:17:48.468859 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.468870 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.468881 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.468892 | orchestrator | 2025-09-20 11:17:48.468902 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 11:17:48.468913 | orchestrator | Saturday 20 September 2025 11:17:43 +0000 (0:00:00.308) 0:00:08.189 **** 2025-09-20 11:17:48.468924 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.468935 | 
orchestrator | 2025-09-20 11:17:48.468946 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 11:17:48.468957 | orchestrator | Saturday 20 September 2025 11:17:44 +0000 (0:00:00.246) 0:00:08.435 **** 2025-09-20 11:17:48.468968 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.468979 | orchestrator | 2025-09-20 11:17:48.468990 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 11:17:48.469001 | orchestrator | Saturday 20 September 2025 11:17:44 +0000 (0:00:00.281) 0:00:08.717 **** 2025-09-20 11:17:48.469011 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.469022 | orchestrator | 2025-09-20 11:17:48.469033 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:48.469045 | orchestrator | Saturday 20 September 2025 11:17:44 +0000 (0:00:00.205) 0:00:08.922 **** 2025-09-20 11:17:48.469055 | orchestrator | 2025-09-20 11:17:48.469066 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:48.469077 | orchestrator | Saturday 20 September 2025 11:17:44 +0000 (0:00:00.062) 0:00:08.985 **** 2025-09-20 11:17:48.469088 | orchestrator | 2025-09-20 11:17:48.469098 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:48.469109 | orchestrator | Saturday 20 September 2025 11:17:44 +0000 (0:00:00.078) 0:00:09.063 **** 2025-09-20 11:17:48.469120 | orchestrator | 2025-09-20 11:17:48.469131 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 11:17:48.469142 | orchestrator | Saturday 20 September 2025 11:17:44 +0000 (0:00:00.197) 0:00:09.261 **** 2025-09-20 11:17:48.469152 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.469163 | orchestrator | 2025-09-20 11:17:48.469174 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-20 11:17:48.469184 | orchestrator | Saturday 20 September 2025 11:17:45 +0000 (0:00:00.249) 0:00:09.510 **** 2025-09-20 11:17:48.469195 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.469206 | orchestrator | 2025-09-20 11:17:48.469217 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:17:48.469228 | orchestrator | Saturday 20 September 2025 11:17:45 +0000 (0:00:00.246) 0:00:09.757 **** 2025-09-20 11:17:48.469258 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.469269 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.469280 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.469291 | orchestrator | 2025-09-20 11:17:48.469302 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-20 11:17:48.469313 | orchestrator | Saturday 20 September 2025 11:17:45 +0000 (0:00:00.289) 0:00:10.046 **** 2025-09-20 11:17:48.469323 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.469334 | orchestrator | 2025-09-20 11:17:48.469345 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-20 11:17:48.469362 | orchestrator | Saturday 20 September 2025 11:17:45 +0000 (0:00:00.220) 0:00:10.266 **** 2025-09-20 11:17:48.469373 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 11:17:48.469384 | orchestrator | 2025-09-20 11:17:48.469395 | 
orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-20 11:17:48.469406 | orchestrator | Saturday 20 September 2025 11:17:47 +0000 (0:00:01.482) 0:00:11.749 **** 2025-09-20 11:17:48.469416 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.469427 | orchestrator | 2025-09-20 11:17:48.469438 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-20 11:17:48.469449 | orchestrator | Saturday 20 September 2025 11:17:47 +0000 (0:00:00.116) 0:00:11.865 **** 2025-09-20 11:17:48.469459 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.469470 | orchestrator | 2025-09-20 11:17:48.469481 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-20 11:17:48.469492 | orchestrator | Saturday 20 September 2025 11:17:47 +0000 (0:00:00.287) 0:00:12.153 **** 2025-09-20 11:17:48.469503 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:48.469513 | orchestrator | 2025-09-20 11:17:48.469524 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-20 11:17:48.469535 | orchestrator | Saturday 20 September 2025 11:17:47 +0000 (0:00:00.088) 0:00:12.241 **** 2025-09-20 11:17:48.469546 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.469556 | orchestrator | 2025-09-20 11:17:48.469567 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:17:48.469578 | orchestrator | Saturday 20 September 2025 11:17:48 +0000 (0:00:00.105) 0:00:12.346 **** 2025-09-20 11:17:48.469589 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:48.469600 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:48.469610 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:48.469621 | orchestrator | 2025-09-20 11:17:48.469632 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-20 11:17:48.469650 | orchestrator | Saturday 20 September 2025 11:17:48 +0000 (0:00:00.397) 0:00:12.743 **** 2025-09-20 11:17:59.988361 | orchestrator | changed: [testbed-node-3] 2025-09-20 11:17:59.988513 | orchestrator | changed: [testbed-node-4] 2025-09-20 11:17:59.988540 | orchestrator | changed: [testbed-node-5] 2025-09-20 11:17:59.988560 | orchestrator | 2025-09-20 11:17:59.988582 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-20 11:17:59.988603 | orchestrator | Saturday 20 September 2025 11:17:50 +0000 (0:00:02.265) 0:00:15.010 **** 2025-09-20 11:17:59.988623 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.988645 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.988663 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.988683 | orchestrator | 2025-09-20 11:17:59.988702 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-20 11:17:59.988722 | orchestrator | Saturday 20 September 2025 11:17:50 +0000 (0:00:00.263) 0:00:15.273 **** 2025-09-20 11:17:59.988742 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.988759 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.988778 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.988796 | orchestrator | 2025-09-20 11:17:59.988817 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-20 11:17:59.988837 | orchestrator | Saturday 20 September 2025 11:17:51 +0000 (0:00:00.429) 
0:00:15.703 **** 2025-09-20 11:17:59.988856 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:59.988876 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:59.988894 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:59.988915 | orchestrator | 2025-09-20 11:17:59.988934 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-20 11:17:59.988954 | orchestrator | Saturday 20 September 2025 11:17:51 +0000 (0:00:00.400) 0:00:16.104 **** 2025-09-20 11:17:59.988974 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.988994 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.989014 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.989065 | orchestrator | 2025-09-20 11:17:59.989085 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-20 11:17:59.989104 | orchestrator | Saturday 20 September 2025 11:17:52 +0000 (0:00:00.287) 0:00:16.392 **** 2025-09-20 11:17:59.989182 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:59.989212 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:59.989230 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:59.989283 | orchestrator | 2025-09-20 11:17:59.989305 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-20 11:17:59.989322 | orchestrator | Saturday 20 September 2025 11:17:52 +0000 (0:00:00.250) 0:00:16.642 **** 2025-09-20 11:17:59.989338 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:59.989356 | orchestrator | skipping: [testbed-node-4] 2025-09-20 11:17:59.989375 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:59.989393 | orchestrator | 2025-09-20 11:17:59.989410 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 11:17:59.989429 | orchestrator | Saturday 20 September 2025 11:17:52 +0000 (0:00:00.249) 0:00:16.892 **** 2025-09-20 11:17:59.989448 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.989466 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.989483 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.989497 | orchestrator | 2025-09-20 11:17:59.989508 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-20 11:17:59.989518 | orchestrator | Saturday 20 September 2025 11:17:53 +0000 (0:00:00.668) 0:00:17.560 **** 2025-09-20 11:17:59.989529 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.989540 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.989550 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.989561 | orchestrator | 2025-09-20 11:17:59.989571 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-20 11:17:59.989582 | orchestrator | Saturday 20 September 2025 11:17:53 +0000 (0:00:00.433) 0:00:17.994 **** 2025-09-20 11:17:59.989593 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.989603 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.989614 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.989624 | orchestrator | 2025-09-20 11:17:59.989635 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-20 11:17:59.989645 | orchestrator | Saturday 20 September 2025 11:17:53 +0000 (0:00:00.269) 0:00:18.264 **** 2025-09-20 11:17:59.989663 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:59.989681 | orchestrator | 
skipping: [testbed-node-4] 2025-09-20 11:17:59.989698 | orchestrator | skipping: [testbed-node-5] 2025-09-20 11:17:59.989715 | orchestrator | 2025-09-20 11:17:59.989732 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-20 11:17:59.989751 | orchestrator | Saturday 20 September 2025 11:17:54 +0000 (0:00:00.275) 0:00:18.540 **** 2025-09-20 11:17:59.989770 | orchestrator | ok: [testbed-node-3] 2025-09-20 11:17:59.989787 | orchestrator | ok: [testbed-node-4] 2025-09-20 11:17:59.989803 | orchestrator | ok: [testbed-node-5] 2025-09-20 11:17:59.989814 | orchestrator | 2025-09-20 11:17:59.989825 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-20 11:17:59.989836 | orchestrator | Saturday 20 September 2025 11:17:54 +0000 (0:00:00.414) 0:00:18.954 **** 2025-09-20 11:17:59.989846 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:59.989858 | orchestrator | 2025-09-20 11:17:59.989869 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-20 11:17:59.989880 | orchestrator | Saturday 20 September 2025 11:17:54 +0000 (0:00:00.222) 0:00:19.177 **** 2025-09-20 11:17:59.989890 | orchestrator | skipping: [testbed-node-3] 2025-09-20 11:17:59.989907 | orchestrator | 2025-09-20 11:17:59.989925 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 11:17:59.989944 | orchestrator | Saturday 20 September 2025 11:17:55 +0000 (0:00:00.244) 0:00:19.421 **** 2025-09-20 11:17:59.989963 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:59.990000 | orchestrator | 2025-09-20 11:17:59.990106 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 11:17:59.990136 | orchestrator | Saturday 20 September 2025 11:17:56 +0000 (0:00:01.449) 0:00:20.871 **** 2025-09-20 11:17:59.990155 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:59.990176 | orchestrator | 2025-09-20 11:17:59.990198 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 11:17:59.990218 | orchestrator | Saturday 20 September 2025 11:17:56 +0000 (0:00:00.226) 0:00:21.098 **** 2025-09-20 11:17:59.990301 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:59.990324 | orchestrator | 2025-09-20 11:17:59.990343 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:59.990362 | orchestrator | Saturday 20 September 2025 11:17:57 +0000 (0:00:00.224) 0:00:21.322 **** 2025-09-20 11:17:59.990380 | orchestrator | 2025-09-20 11:17:59.990398 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:59.990413 | orchestrator | Saturday 20 September 2025 11:17:57 +0000 (0:00:00.064) 0:00:21.386 **** 2025-09-20 11:17:59.990424 | orchestrator | 2025-09-20 11:17:59.990435 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 11:17:59.990445 | orchestrator | Saturday 20 September 2025 11:17:57 +0000 (0:00:00.061) 0:00:21.448 **** 2025-09-20 11:17:59.990456 | orchestrator | 2025-09-20 11:17:59.990467 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-20 11:17:59.990477 | orchestrator | 
Saturday 20 September 2025 11:17:57 +0000 (0:00:00.062) 0:00:21.511 **** 2025-09-20 11:17:59.990488 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 11:17:59.990499 | orchestrator | 2025-09-20 11:17:59.990509 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 11:17:59.990520 | orchestrator | Saturday 20 September 2025 11:17:58 +0000 (0:00:01.503) 0:00:23.014 **** 2025-09-20 11:17:59.990531 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-20 11:17:59.990541 | orchestrator |  "msg": [ 2025-09-20 11:17:59.990553 | orchestrator |  "Validator run completed.", 2025-09-20 11:17:59.990564 | orchestrator |  "You can find the report file here:", 2025-09-20 11:17:59.990574 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-20T11:17:36+00:00-report.json", 2025-09-20 11:17:59.990587 | orchestrator |  "on the following host:", 2025-09-20 11:17:59.990598 | orchestrator |  "testbed-manager" 2025-09-20 11:17:59.990608 | orchestrator |  ] 2025-09-20 11:17:59.990619 | orchestrator | } 2025-09-20 11:17:59.990630 | orchestrator | 2025-09-20 11:17:59.990651 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:17:59.990672 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-20 11:17:59.990691 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-20 11:17:59.990709 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-20 11:17:59.990728 | orchestrator | 2025-09-20 11:17:59.990746 | orchestrator | 2025-09-20 11:17:59.990766 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:17:59.990778 | orchestrator | Saturday 20 September 2025 11:17:59 +0000 (0:00:00.905) 0:00:23.919 **** 2025-09-20 11:17:59.990788 | orchestrator | =============================================================================== 2025-09-20 11:17:59.990799 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.27s 2025-09-20 11:17:59.990810 | orchestrator | Write report file ------------------------------------------------------- 1.50s 2025-09-20 11:17:59.990821 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.48s 2025-09-20 11:17:59.990847 | orchestrator | Aggregate test results step one ----------------------------------------- 1.45s 2025-09-20 11:17:59.990857 | orchestrator | Create report output directory ------------------------------------------ 0.98s 2025-09-20 11:17:59.990868 | orchestrator | Print report file information ------------------------------------------- 0.91s 2025-09-20 11:17:59.990879 | orchestrator | Prepare test data ------------------------------------------------------- 0.67s 2025-09-20 11:17:59.990889 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-09-20 11:17:59.990900 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.54s 2025-09-20 11:17:59.990911 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.53s 2025-09-20 11:17:59.990921 | orchestrator | Set test result to passed if count matches ------------------------------ 0.52s 2025-09-20 11:17:59.990932 | 
orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2025-09-20 11:17:59.990942 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-09-20 11:17:59.990953 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.43s 2025-09-20 11:17:59.990963 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.43s 2025-09-20 11:17:59.990974 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.41s 2025-09-20 11:17:59.990984 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.40s 2025-09-20 11:17:59.990995 | orchestrator | Prepare test data ------------------------------------------------------- 0.40s 2025-09-20 11:17:59.991005 | orchestrator | Flush handlers ---------------------------------------------------------- 0.34s 2025-09-20 11:17:59.991016 | orchestrator | Get list of ceph-osd containers that are not running -------------------- 0.34s 2025-09-20 11:18:00.349535 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-20 11:18:00.358362 | orchestrator | + set -e 2025-09-20 11:18:00.358404 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 11:18:00.358421 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 11:18:00.358440 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 11:18:00.358458 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 11:18:00.358478 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 11:18:00.358497 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 11:18:00.358519 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 11:18:00.359141 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 11:18:00.359165 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 11:18:00.359178 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 11:18:00.359191 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 11:18:00.359204 | orchestrator | ++ export ARA=false 2025-09-20 11:18:00.359216 | orchestrator | ++ ARA=false 2025-09-20 11:18:00.359229 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 11:18:00.359241 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 11:18:00.359277 | orchestrator | ++ export TEMPEST=false 2025-09-20 11:18:00.359289 | orchestrator | ++ TEMPEST=false 2025-09-20 11:18:00.359301 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 11:18:00.359314 | orchestrator | ++ IS_ZUUL=true 2025-09-20 11:18:00.359883 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 11:18:00.359912 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2025-09-20 11:18:00.359932 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 11:18:00.359950 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 11:18:00.359970 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 11:18:00.359988 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 11:18:00.360003 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 11:18:00.360014 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 11:18:00.360024 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 11:18:00.360035 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 11:18:00.360046 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-20 11:18:00.360056 | orchestrator | + source /etc/os-release 2025-09-20 11:18:00.360067 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-20 
11:18:00.360078 | orchestrator | ++ NAME=Ubuntu 2025-09-20 11:18:00.360089 | orchestrator | ++ VERSION_ID=24.04 2025-09-20 11:18:00.360099 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-20 11:18:00.360110 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-20 11:18:00.360148 | orchestrator | ++ ID=ubuntu 2025-09-20 11:18:00.360160 | orchestrator | ++ ID_LIKE=debian 2025-09-20 11:18:00.360171 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-20 11:18:00.360181 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-20 11:18:00.360192 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-20 11:18:00.360203 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-20 11:18:00.360215 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-20 11:18:00.360226 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-20 11:18:00.360237 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-20 11:18:00.360269 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-20 11:18:00.360282 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-20 11:18:00.388211 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-20 11:18:23.848007 | orchestrator | 2025-09-20 11:18:23.848147 | orchestrator | # Status of Elasticsearch 2025-09-20 11:18:23.848164 | orchestrator | 2025-09-20 11:18:23.848176 | orchestrator | + pushd /opt/configuration/contrib 2025-09-20 11:18:23.848189 | orchestrator | + echo 2025-09-20 11:18:23.848200 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-20 11:18:23.848211 | orchestrator | + echo 2025-09-20 11:18:23.848222 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-20 11:18:24.042905 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-20 11:18:24.043010 | orchestrator | 2025-09-20 11:18:24.043025 | orchestrator | # Status of MariaDB 2025-09-20 11:18:24.043037 | orchestrator | 2025-09-20 11:18:24.043049 | orchestrator | + echo 2025-09-20 11:18:24.043060 | orchestrator | + echo '# Status of MariaDB' 2025-09-20 11:18:24.043075 | orchestrator | + echo 2025-09-20 11:18:24.043101 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-20 11:18:24.043125 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-20 11:18:24.099531 | orchestrator | Reading package lists... 2025-09-20 11:18:24.466793 | orchestrator | Building dependency tree... 2025-09-20 11:18:24.467212 | orchestrator | Reading state information... 2025-09-20 11:18:24.849114 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-20 11:18:24.849202 | orchestrator | bc set to manually installed. 2025-09-20 11:18:24.849212 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-09-20 11:18:25.546986 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-20 11:18:25.547407 | orchestrator | 2025-09-20 11:18:25.547428 | orchestrator | # Status of Prometheus 2025-09-20 11:18:25.547440 | orchestrator | 2025-09-20 11:18:25.547452 | orchestrator | + echo 2025-09-20 11:18:25.547463 | orchestrator | + echo '# Status of Prometheus' 2025-09-20 11:18:25.547474 | orchestrator | + echo 2025-09-20 11:18:25.547485 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-20 11:18:25.609787 | orchestrator | Unauthorized 2025-09-20 11:18:25.612809 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-20 11:18:25.676766 | orchestrator | Unauthorized 2025-09-20 11:18:25.679563 | orchestrator | 2025-09-20 11:18:25.679609 | orchestrator | # Status of RabbitMQ 2025-09-20 11:18:25.679622 | orchestrator | 2025-09-20 11:18:25.679634 | orchestrator | + echo 2025-09-20 11:18:25.679645 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-20 11:18:25.679656 | orchestrator | + echo 2025-09-20 11:18:25.679667 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-20 11:18:26.146797 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-20 11:18:26.155446 | orchestrator | 2025-09-20 11:18:26.155497 | orchestrator | # Status of Redis 2025-09-20 11:18:26.155511 | orchestrator | 2025-09-20 11:18:26.155523 | orchestrator | + echo 2025-09-20 11:18:26.155535 | orchestrator | + echo '# Status of Redis' 2025-09-20 11:18:26.155547 | orchestrator | + echo 2025-09-20 11:18:26.155560 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-20 11:18:26.162751 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001535s;;;0.000000;10.000000 2025-09-20 11:18:26.162777 | orchestrator | 2025-09-20 11:18:26.162789 | orchestrator | # Create backup of MariaDB database 2025-09-20 11:18:26.162801 | orchestrator | 2025-09-20 11:18:26.162812 | orchestrator | + popd 2025-09-20 11:18:26.162823 | orchestrator | + echo 2025-09-20 11:18:26.162834 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-20 11:18:26.162844 | orchestrator | + echo 2025-09-20 11:18:26.162855 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-20 11:18:28.148969 | orchestrator | 2025-09-20 11:18:28 | INFO  | Task aa2d26b9-017d-4398-9bda-fdde887455d0 (mariadb_backup) was prepared for execution. 2025-09-20 11:18:28.149066 | orchestrator | 2025-09-20 11:18:28 | INFO  | It takes a moment until task aa2d26b9-017d-4398-9bda-fdde887455d0 (mariadb_backup) has been started and output is visible here. 
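The health checks above (Elasticsearch, Galera, Prometheus, RabbitMQ, Redis) come from /opt/configuration/scripts/check/200-infrastructure.sh and can be re-run by hand on the manager node. A minimal sketch, assuming the same testbed hostnames and demo credentials that appear in this log and that /opt/configuration/contrib is present; note the Prometheus endpoints answered "Unauthorized" here, so authenticated access would additionally be needed:

  # Sketch: re-run the infrastructure health checks from 200-infrastructure.sh by hand.
  # Hostnames and credentials are the ones visible in the log above.
  set -e
  cd /opt/configuration/contrib
  # Elasticsearch cluster health (expects status: green, 3 data nodes)
  bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
  # MariaDB/Galera cluster size (expects wsrep_cluster_size = 3)
  bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
  # Prometheus liveness/readiness probes (unauthenticated calls return "Unauthorized" above)
  curl -s https://api-int.testbed.osism.xyz:9091/-/healthy; echo
  curl -s https://api-int.testbed.osism.xyz:9091/-/ready; echo
  # RabbitMQ cluster state (expects 3 running disc nodes)
  perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
  # Full MariaDB backup via the OSISM mariadb_backup play, as triggered above
  osism apply mariadb_backup -e mariadb_backup_type=full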
2025-09-20 11:19:02.735833 | orchestrator | 2025-09-20 11:19:02.735957 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 11:19:02.735974 | orchestrator | 2025-09-20 11:19:02.735985 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 11:19:02.735998 | orchestrator | Saturday 20 September 2025 11:18:31 +0000 (0:00:00.178) 0:00:00.178 **** 2025-09-20 11:19:02.736010 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:19:02.736022 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:19:02.736033 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:19:02.736044 | orchestrator | 2025-09-20 11:19:02.736055 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 11:19:02.736066 | orchestrator | Saturday 20 September 2025 11:18:32 +0000 (0:00:00.301) 0:00:00.479 **** 2025-09-20 11:19:02.736077 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-20 11:19:02.736088 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-20 11:19:02.736099 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-20 11:19:02.736109 | orchestrator | 2025-09-20 11:19:02.736120 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-20 11:19:02.736131 | orchestrator | 2025-09-20 11:19:02.736142 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-20 11:19:02.736152 | orchestrator | Saturday 20 September 2025 11:18:32 +0000 (0:00:00.511) 0:00:00.990 **** 2025-09-20 11:19:02.736164 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 11:19:02.736176 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-20 11:19:02.736187 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-20 11:19:02.736198 | orchestrator | 2025-09-20 11:19:02.736208 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 11:19:02.736219 | orchestrator | Saturday 20 September 2025 11:18:32 +0000 (0:00:00.359) 0:00:01.350 **** 2025-09-20 11:19:02.736230 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 11:19:02.736242 | orchestrator | 2025-09-20 11:19:02.736253 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-20 11:19:02.736264 | orchestrator | Saturday 20 September 2025 11:18:33 +0000 (0:00:00.493) 0:00:01.844 **** 2025-09-20 11:19:02.736274 | orchestrator | ok: [testbed-node-0] 2025-09-20 11:19:02.736285 | orchestrator | ok: [testbed-node-1] 2025-09-20 11:19:02.736296 | orchestrator | ok: [testbed-node-2] 2025-09-20 11:19:02.736307 | orchestrator | 2025-09-20 11:19:02.736318 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-09-20 11:19:02.736329 | orchestrator | Saturday 20 September 2025 11:18:36 +0000 (0:00:02.876) 0:00:04.720 **** 2025-09-20 11:19:02.736367 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-20 11:19:02.736380 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-20 11:19:02.736393 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-20 11:19:02.736431 | orchestrator | 
mariadb_bootstrap_restart 2025-09-20 11:19:02.736445 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:19:02.736458 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:19:02.736471 | orchestrator | changed: [testbed-node-0] 2025-09-20 11:19:02.736484 | orchestrator | 2025-09-20 11:19:02.736497 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-20 11:19:02.736510 | orchestrator | skipping: no hosts matched 2025-09-20 11:19:02.736522 | orchestrator | 2025-09-20 11:19:02.736534 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-20 11:19:02.736547 | orchestrator | skipping: no hosts matched 2025-09-20 11:19:02.736559 | orchestrator | 2025-09-20 11:19:02.736571 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-20 11:19:02.736584 | orchestrator | skipping: no hosts matched 2025-09-20 11:19:02.736596 | orchestrator | 2025-09-20 11:19:02.736609 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-20 11:19:02.736622 | orchestrator | 2025-09-20 11:19:02.736634 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-20 11:19:02.736646 | orchestrator | Saturday 20 September 2025 11:19:01 +0000 (0:00:25.447) 0:00:30.167 **** 2025-09-20 11:19:02.736659 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:19:02.736671 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:19:02.736684 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:19:02.736697 | orchestrator | 2025-09-20 11:19:02.736710 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-20 11:19:02.736722 | orchestrator | Saturday 20 September 2025 11:19:02 +0000 (0:00:00.280) 0:00:30.448 **** 2025-09-20 11:19:02.736733 | orchestrator | skipping: [testbed-node-0] 2025-09-20 11:19:02.736744 | orchestrator | skipping: [testbed-node-1] 2025-09-20 11:19:02.736755 | orchestrator | skipping: [testbed-node-2] 2025-09-20 11:19:02.736766 | orchestrator | 2025-09-20 11:19:02.736776 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:19:02.736789 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 11:19:02.736801 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 11:19:02.736813 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 11:19:02.736824 | orchestrator | 2025-09-20 11:19:02.736835 | orchestrator | 2025-09-20 11:19:02.736846 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:19:02.736857 | orchestrator | Saturday 20 September 2025 11:19:02 +0000 (0:00:00.380) 0:00:30.828 **** 2025-09-20 11:19:02.736868 | orchestrator | =============================================================================== 2025-09-20 11:19:02.736879 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 25.45s 2025-09-20 11:19:02.736907 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.88s 2025-09-20 11:19:02.736919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-09-20 11:19:02.736930 | 
orchestrator | mariadb : include_tasks ------------------------------------------------- 0.49s 2025-09-20 11:19:02.736941 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.38s 2025-09-20 11:19:02.736951 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.36s 2025-09-20 11:19:02.736962 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-20 11:19:02.736973 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s 2025-09-20 11:19:02.944999 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-20 11:19:02.952593 | orchestrator | + set -e 2025-09-20 11:19:02.952629 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 11:19:02.952668 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 11:19:02.952759 | orchestrator | ++ INTERACTIVE=false 2025-09-20 11:19:02.952773 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 11:19:02.952784 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 11:19:02.952801 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-20 11:19:02.954068 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-20 11:19:02.961593 | orchestrator | 2025-09-20 11:19:02.961658 | orchestrator | # OpenStack endpoints 2025-09-20 11:19:02.961670 | orchestrator | 2025-09-20 11:19:02.961680 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 11:19:02.961690 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 11:19:02.961700 | orchestrator | + export OS_CLOUD=admin 2025-09-20 11:19:02.961709 | orchestrator | + OS_CLOUD=admin 2025-09-20 11:19:02.961719 | orchestrator | + echo 2025-09-20 11:19:02.961729 | orchestrator | + echo '# OpenStack endpoints' 2025-09-20 11:19:02.961738 | orchestrator | + echo 2025-09-20 11:19:02.961748 | orchestrator | + openstack endpoint list 2025-09-20 11:19:06.356519 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-20 11:19:06.356630 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-20 11:19:06.356646 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-20 11:19:06.356675 | orchestrator | | 18ecd628bf084be88624e8f77a784c5b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-20 11:19:06.356687 | orchestrator | | 2074f57cf86142ffad600b8f4536a571 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-20 11:19:06.356698 | orchestrator | | 210ba0bc8c27474a8a96ff62e545f179 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-20 11:19:06.356709 | orchestrator | | 22fc850de76c482dbf4bc1c27775abe7 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-20 11:19:06.356720 | orchestrator | | 431800af94a44e4abe11c4b0c4087d23 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-20 11:19:06.356731 | orchestrator | | 4c64969c303e42d2b1f0c852ccc37199 | RegionOne | 
magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-20 11:19:06.356741 | orchestrator | | 4cc232845af04e09bd7af5f3b2844e28 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-20 11:19:06.356752 | orchestrator | | 5f208128caea4e35a2ba41ecef74c162 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-20 11:19:06.356763 | orchestrator | | 6338b3bed3b24fd6b97bbf4cfe21cb26 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-20 11:19:06.356774 | orchestrator | | 635a9e8f56a447fda95c4626bf5a1d92 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-20 11:19:06.356785 | orchestrator | | 6e2ca79bdb974e4fbb226c1a075141ef | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-20 11:19:06.356795 | orchestrator | | 7503efc313d54583aa2b8c6a00bf4faa | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-20 11:19:06.356806 | orchestrator | | 83a6c70e2a154081afffcf57570a07f3 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-20 11:19:06.356842 | orchestrator | | 87690f49d4384743b3d282d8a17ae1a5 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-20 11:19:06.356853 | orchestrator | | 9aa3614ace264068a11a0136b40e4e9c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-20 11:19:06.356864 | orchestrator | | 9bba85b033e24968ab589ff561ea18fa | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-20 11:19:06.356875 | orchestrator | | 9c511d47970f48f5b7977467de8583e8 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-20 11:19:06.356885 | orchestrator | | af4e7c8e462746a3a6e4ece68f9f99c0 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-20 11:19:06.356896 | orchestrator | | bca13b6581fb4a2c86e56f79b49a0c94 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-20 11:19:06.356907 | orchestrator | | e71faa048e114ca5873e69e44f8719ba | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-20 11:19:06.356934 | orchestrator | | f36b779427334cb482d2ebcac97eb4d6 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-20 11:19:06.356946 | orchestrator | | f8caa77396f346ee998b47fb8345d27a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-20 11:19:06.356957 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-20 11:19:06.534432 | orchestrator | 2025-09-20 11:19:06.534570 | orchestrator | # Cinder 2025-09-20 11:19:06.534588 | orchestrator | 2025-09-20 11:19:06.534604 | orchestrator | + echo 2025-09-20 11:19:06.623942 | orchestrator | + echo '# Cinder' 2025-09-20 11:19:06.624039 | orchestrator | + echo 2025-09-20 11:19:06.624054 | orchestrator | + openstack volume service list 2025-09-20 11:19:09.023918 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-20 11:19:09.024006 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-20 11:19:09.024014 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-20 11:19:09.024020 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-20T11:19:03.000000 | 2025-09-20 11:19:09.024025 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-20T11:19:05.000000 | 2025-09-20 11:19:09.024030 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-20T11:19:05.000000 | 2025-09-20 11:19:09.024035 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-20T11:19:06.000000 | 2025-09-20 11:19:09.024040 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-20T11:19:07.000000 | 2025-09-20 11:19:09.024045 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-20T11:18:59.000000 | 2025-09-20 11:19:09.024050 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-20T11:19:08.000000 | 2025-09-20 11:19:09.024055 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-20T11:19:08.000000 | 2025-09-20 11:19:09.024060 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-20T11:18:58.000000 | 2025-09-20 11:19:09.024081 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-20 11:19:09.204213 | orchestrator | 2025-09-20 11:19:09.204302 | orchestrator | # Neutron 2025-09-20 11:19:09.204314 | orchestrator | 2025-09-20 11:19:09.204325 | orchestrator | + echo 2025-09-20 11:19:09.204335 | orchestrator | + echo '# Neutron' 2025-09-20 11:19:09.204397 | orchestrator | + echo 2025-09-20 11:19:09.204408 | orchestrator | + openstack network agent list 2025-09-20 11:19:11.874817 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-20 11:19:11.874905 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-20 11:19:11.874911 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-20 11:19:11.874916 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-20 11:19:11.874921 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-20 11:19:11.874925 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-20 11:19:11.874930 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-20 11:19:11.874934 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-20 11:19:11.874938 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-20 11:19:11.874942 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | 
OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-20 11:19:11.874963 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-20 11:19:11.875039 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-20 11:19:11.875046 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-20 11:19:12.114988 | orchestrator | + openstack network service provider list 2025-09-20 11:19:14.523619 | orchestrator | +---------------+------+---------+ 2025-09-20 11:19:14.523717 | orchestrator | | Service Type | Name | Default | 2025-09-20 11:19:14.523728 | orchestrator | +---------------+------+---------+ 2025-09-20 11:19:14.523739 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-20 11:19:14.523748 | orchestrator | +---------------+------+---------+ 2025-09-20 11:19:14.738499 | orchestrator | 2025-09-20 11:19:14.738607 | orchestrator | # Nova 2025-09-20 11:19:14.738622 | orchestrator | 2025-09-20 11:19:14.738633 | orchestrator | + echo 2025-09-20 11:19:14.738644 | orchestrator | + echo '# Nova' 2025-09-20 11:19:14.738655 | orchestrator | + echo 2025-09-20 11:19:14.738667 | orchestrator | + openstack compute service list 2025-09-20 11:19:17.305278 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-20 11:19:17.305458 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-20 11:19:17.305476 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-20 11:19:17.305522 | orchestrator | | bf169d77-5e39-4a9e-bbac-f27daf561412 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-20T11:19:08.000000 | 2025-09-20 11:19:17.306400 | orchestrator | | d6d5bc64-1677-45cc-b5de-1c3c08bd6b21 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-20T11:19:11.000000 | 2025-09-20 11:19:17.306429 | orchestrator | | 190e21e5-c492-4232-8595-8b5a67628842 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-20T11:19:08.000000 | 2025-09-20 11:19:17.306440 | orchestrator | | 0944bb24-ba55-46f0-a1a5-02cf0b324f41 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-20T11:19:16.000000 | 2025-09-20 11:19:17.306451 | orchestrator | | 09cedc9f-9010-4f1a-9fa6-0a948cb5fffe | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-20T11:19:09.000000 | 2025-09-20 11:19:17.306462 | orchestrator | | f4d2437c-3357-44af-8f8e-b25814e1398e | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-20T11:19:12.000000 | 2025-09-20 11:19:17.306473 | orchestrator | | bd36efd5-6571-47be-ada9-220c32140c7d | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-20T11:19:08.000000 | 2025-09-20 11:19:17.306484 | orchestrator | | 12e225ac-0b80-40dc-b3d9-ee1e106a0f88 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-20T11:19:09.000000 | 2025-09-20 11:19:17.306495 | orchestrator | | 8d826857-6151-4966-8459-c525bfbc2090 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-20T11:19:09.000000 | 2025-09-20 11:19:17.306506 | orchestrator | 
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-20 11:19:17.491778 | orchestrator | + openstack hypervisor list 2025-09-20 11:19:19.963793 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-20 11:19:19.963899 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-20 11:19:19.963912 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-20 11:19:19.963924 | orchestrator | | 250ae8aa-916e-4678-8ff1-060cc9325a3c | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-20 11:19:19.963935 | orchestrator | | 987eee8f-7ada-491e-a621-49f175615111 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-20 11:19:19.963946 | orchestrator | | 0b02c2ee-d315-40eb-b3ee-6b51fb16746c | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-20 11:19:19.963957 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-20 11:19:20.135757 | orchestrator | 2025-09-20 11:19:20.135850 | orchestrator | # Run OpenStack test play 2025-09-20 11:19:20.135864 | orchestrator | 2025-09-20 11:19:20.135875 | orchestrator | + echo 2025-09-20 11:19:20.135885 | orchestrator | + echo '# Run OpenStack test play' 2025-09-20 11:19:20.135895 | orchestrator | + echo 2025-09-20 11:19:20.135905 | orchestrator | + osism apply --environment openstack test 2025-09-20 11:19:21.847960 | orchestrator | 2025-09-20 11:19:21 | INFO  | Trying to run play test in environment openstack 2025-09-20 11:19:31.939743 | orchestrator | 2025-09-20 11:19:31 | INFO  | Task 522377d8-c0e8-4a1d-85e4-792876d79739 (test) was prepared for execution. 2025-09-20 11:19:31.939849 | orchestrator | 2025-09-20 11:19:31 | INFO  | It takes a moment until task 522377d8-c0e8-4a1d-85e4-792876d79739 (test) has been started and output is visible here. 
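The OpenStack checks above are the relevant part of /opt/configuration/scripts/check/300-openstack.sh. Condensed into a standalone sketch, assuming the admin entry in clouds.yaml used by this testbed:

  # Sketch: OpenStack control-plane sanity checks, as run by 300-openstack.sh above.
  export OS_CLOUD=admin
  openstack endpoint list                   # public/internal endpoints per service
  openstack volume service list             # cinder scheduler/volume/backup should be up
  openstack network agent list              # OVN controller and metadata agents alive
  openstack network service provider list   # L3_ROUTER_NAT provided by ovn
  openstack compute service list            # nova scheduler/conductor/compute up
  openstack hypervisor list                 # QEMU hypervisors on the compute nodes
  # End-to-end test play (creates the test domain/project/users, network topology and instances):
  osism apply --environment openstack test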
2025-09-20 11:26:02.989221 | orchestrator | 2025-09-20 11:26:02.989334 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-20 11:26:02.989349 | orchestrator | 2025-09-20 11:26:02.989360 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-20 11:26:02.989370 | orchestrator | Saturday 20 September 2025 11:19:35 +0000 (0:00:00.067) 0:00:00.067 **** 2025-09-20 11:26:02.989380 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989391 | orchestrator | 2025-09-20 11:26:02.989423 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-20 11:26:02.989434 | orchestrator | Saturday 20 September 2025 11:19:38 +0000 (0:00:03.255) 0:00:03.323 **** 2025-09-20 11:26:02.989444 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989454 | orchestrator | 2025-09-20 11:26:02.989483 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-20 11:26:02.989494 | orchestrator | Saturday 20 September 2025 11:19:42 +0000 (0:00:04.091) 0:00:07.414 **** 2025-09-20 11:26:02.989503 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989513 | orchestrator | 2025-09-20 11:26:02.989522 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-20 11:26:02.989532 | orchestrator | Saturday 20 September 2025 11:19:49 +0000 (0:00:06.499) 0:00:13.913 **** 2025-09-20 11:26:02.989542 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989551 | orchestrator | 2025-09-20 11:26:02.989561 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-20 11:26:02.989571 | orchestrator | Saturday 20 September 2025 11:19:53 +0000 (0:00:03.923) 0:00:17.837 **** 2025-09-20 11:26:02.989580 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989590 | orchestrator | 2025-09-20 11:26:02.989599 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-20 11:26:02.989609 | orchestrator | Saturday 20 September 2025 11:19:57 +0000 (0:00:04.018) 0:00:21.856 **** 2025-09-20 11:26:02.989619 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-20 11:26:02.989629 | orchestrator | changed: [localhost] => (item=member) 2025-09-20 11:26:02.989640 | orchestrator | changed: [localhost] => (item=creator) 2025-09-20 11:26:02.989649 | orchestrator | 2025-09-20 11:26:02.989659 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-20 11:26:02.989741 | orchestrator | Saturday 20 September 2025 11:20:09 +0000 (0:00:12.187) 0:00:34.043 **** 2025-09-20 11:26:02.989753 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989763 | orchestrator | 2025-09-20 11:26:02.989774 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-20 11:26:02.989786 | orchestrator | Saturday 20 September 2025 11:20:13 +0000 (0:00:03.887) 0:00:37.931 **** 2025-09-20 11:26:02.989797 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989808 | orchestrator | 2025-09-20 11:26:02.989819 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-20 11:26:02.989830 | orchestrator | Saturday 20 September 2025 11:20:17 +0000 (0:00:04.524) 0:00:42.455 **** 2025-09-20 11:26:02.989842 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989853 | 
orchestrator | 2025-09-20 11:26:02.989864 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-20 11:26:02.989875 | orchestrator | Saturday 20 September 2025 11:20:21 +0000 (0:00:03.827) 0:00:46.283 **** 2025-09-20 11:26:02.989886 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989898 | orchestrator | 2025-09-20 11:26:02.989909 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-20 11:26:02.989920 | orchestrator | Saturday 20 September 2025 11:20:25 +0000 (0:00:03.501) 0:00:49.785 **** 2025-09-20 11:26:02.989931 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989943 | orchestrator | 2025-09-20 11:26:02.989954 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-20 11:26:02.989965 | orchestrator | Saturday 20 September 2025 11:20:28 +0000 (0:00:03.718) 0:00:53.504 **** 2025-09-20 11:26:02.989977 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.989988 | orchestrator | 2025-09-20 11:26:02.989999 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-20 11:26:02.990011 | orchestrator | Saturday 20 September 2025 11:20:32 +0000 (0:00:03.834) 0:00:57.338 **** 2025-09-20 11:26:02.990068 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.990079 | orchestrator | 2025-09-20 11:26:02.990090 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-20 11:26:02.990103 | orchestrator | Saturday 20 September 2025 11:20:46 +0000 (0:00:14.059) 0:01:11.398 **** 2025-09-20 11:26:02.990114 | orchestrator | changed: [localhost] => (item=test) 2025-09-20 11:26:02.990126 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-20 11:26:02.990138 | orchestrator | 2025-09-20 11:26:02.990156 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 11:26:02.990176 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-20 11:26:02.990186 | orchestrator | 2025-09-20 11:26:02.990195 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 11:26:02.990205 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-20 11:26:02.990215 | orchestrator | 2025-09-20 11:26:02.990224 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 11:26:02.990234 | orchestrator | 2025-09-20 11:26:02.990243 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 11:26:02.990253 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-20 11:26:02.990262 | orchestrator | 2025-09-20 11:26:02.990272 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-09-20 11:26:02.990281 | orchestrator | Saturday 20 September 2025 11:24:44 +0000 (0:03:57.827) 0:05:09.225 **** 2025-09-20 11:26:02.990291 | orchestrator | changed: [localhost] => (item=test) 2025-09-20 11:26:02.990300 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-20 11:26:02.990310 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-20 11:26:02.990323 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-20 11:26:02.990333 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-20 11:26:02.990343 | orchestrator | 2025-09-20 11:26:02.990352 | orchestrator | TASK [Add tag to instances] 
**************************************************** 2025-09-20 11:26:02.990362 | orchestrator | Saturday 20 September 2025 11:25:06 +0000 (0:00:22.077) 0:05:31.302 **** 2025-09-20 11:26:02.990388 | orchestrator | changed: [localhost] => (item=test) 2025-09-20 11:26:02.990398 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-20 11:26:02.990408 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-20 11:26:02.990417 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-20 11:26:02.990427 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-20 11:26:02.990436 | orchestrator | 2025-09-20 11:26:02.990446 | orchestrator | TASK [Create test volume] ****************************************************** 2025-09-20 11:26:02.990456 | orchestrator | Saturday 20 September 2025 11:25:38 +0000 (0:00:31.601) 0:06:02.904 **** 2025-09-20 11:26:02.990465 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.990475 | orchestrator | 2025-09-20 11:26:02.990484 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-09-20 11:26:02.990494 | orchestrator | Saturday 20 September 2025 11:25:44 +0000 (0:00:06.310) 0:06:09.214 **** 2025-09-20 11:26:02.990503 | orchestrator | changed: [localhost] 2025-09-20 11:26:02.990513 | orchestrator | 2025-09-20 11:26:02.990523 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-09-20 11:26:02.990532 | orchestrator | Saturday 20 September 2025 11:25:57 +0000 (0:00:13.347) 0:06:22.562 **** 2025-09-20 11:26:02.990542 | orchestrator | ok: [localhost] 2025-09-20 11:26:02.990552 | orchestrator | 2025-09-20 11:26:02.990561 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-09-20 11:26:02.990571 | orchestrator | Saturday 20 September 2025 11:26:02 +0000 (0:00:04.798) 0:06:27.361 **** 2025-09-20 11:26:02.990580 | orchestrator | ok: [localhost] => { 2025-09-20 11:26:02.990590 | orchestrator |  "msg": "192.168.112.104" 2025-09-20 11:26:02.990600 | orchestrator | } 2025-09-20 11:26:02.990610 | orchestrator | 2025-09-20 11:26:02.990620 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 11:26:02.990630 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 11:26:02.990641 | orchestrator | 2025-09-20 11:26:02.990651 | orchestrator | 2025-09-20 11:26:02.990660 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 11:26:02.990670 | orchestrator | Saturday 20 September 2025 11:26:02 +0000 (0:00:00.046) 0:06:27.408 **** 2025-09-20 11:26:02.990696 | orchestrator | =============================================================================== 2025-09-20 11:26:02.990706 | orchestrator | Create test instances ------------------------------------------------- 237.83s 2025-09-20 11:26:02.990723 | orchestrator | Add tag to instances --------------------------------------------------- 31.60s 2025-09-20 11:26:02.990732 | orchestrator | Add metadata to instances ---------------------------------------------- 22.08s 2025-09-20 11:26:02.990751 | orchestrator | Create test network topology ------------------------------------------- 14.06s 2025-09-20 11:26:02.990761 | orchestrator | Attach test volume ----------------------------------------------------- 13.35s 2025-09-20 11:26:02.990771 | orchestrator | Add member roles to user test 
------------------------------------------ 12.19s 2025-09-20 11:26:02.990781 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.50s 2025-09-20 11:26:02.990790 | orchestrator | Create test volume ------------------------------------------------------ 6.31s 2025-09-20 11:26:02.990800 | orchestrator | Create floating ip address ---------------------------------------------- 4.80s 2025-09-20 11:26:02.990810 | orchestrator | Create ssh security group ----------------------------------------------- 4.52s 2025-09-20 11:26:02.990819 | orchestrator | Create test-admin user -------------------------------------------------- 4.09s 2025-09-20 11:26:02.990829 | orchestrator | Create test user -------------------------------------------------------- 4.02s 2025-09-20 11:26:02.990838 | orchestrator | Create test project ----------------------------------------------------- 3.92s 2025-09-20 11:26:02.990848 | orchestrator | Create test server group ------------------------------------------------ 3.89s 2025-09-20 11:26:02.990858 | orchestrator | Create test keypair ----------------------------------------------------- 3.83s 2025-09-20 11:26:02.990867 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.83s 2025-09-20 11:26:02.990877 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.72s 2025-09-20 11:26:02.990886 | orchestrator | Create icmp security group ---------------------------------------------- 3.50s 2025-09-20 11:26:02.990896 | orchestrator | Create test domain ------------------------------------------------------ 3.26s 2025-09-20 11:26:02.990906 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-09-20 11:26:03.200262 | orchestrator | + server_list 2025-09-20 11:26:03.200370 | orchestrator | + openstack --os-cloud test server list 2025-09-20 11:26:06.479272 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-20 11:26:06.479386 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-09-20 11:26:06.479402 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-20 11:26:06.479414 | orchestrator | | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | test-4 | ACTIVE | auto_allocated_network=10.42.0.49, 192.168.112.137 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 11:26:06.479425 | orchestrator | | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | test-3 | ACTIVE | auto_allocated_network=10.42.0.30, 192.168.112.135 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 11:26:06.479437 | orchestrator | | 8be32264-c85a-485a-bc17-6b05e3d5e2ea | test-2 | ACTIVE | auto_allocated_network=10.42.0.27, 192.168.112.116 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 11:26:06.479449 | orchestrator | | d44177f7-00e6-40b1-9e10-3388faf842b9 | test-1 | ACTIVE | auto_allocated_network=10.42.0.41, 192.168.112.197 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 11:26:06.479460 | orchestrator | | ba383266-425b-4d74-b766-c13e936ea5bf | test | ACTIVE | auto_allocated_network=10.42.0.29, 192.168.112.104 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 11:26:06.479471 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-20 11:26:06.785390 | orchestrator | + openstack --os-cloud test server show test 2025-09-20 11:26:10.219000 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:10.219135 | orchestrator | | Field | Value | 2025-09-20 11:26:10.219156 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:10.219168 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 11:26:10.219179 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 11:26:10.219189 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 11:26:10.219200 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-09-20 11:26:10.219210 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 11:26:10.219220 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 11:26:10.219248 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 11:26:10.219265 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 11:26:10.219276 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 11:26:10.219289 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 11:26:10.219300 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 11:26:10.219310 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 11:26:10.219320 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 11:26:10.219331 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 11:26:10.219341 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 11:26:10.219351 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T11:21:27.000000 | 2025-09-20 11:26:10.219379 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 11:26:10.219390 | orchestrator | | accessIPv4 | | 2025-09-20 11:26:10.219400 | orchestrator | | accessIPv6 | | 2025-09-20 11:26:10.219413 | orchestrator | | addresses | auto_allocated_network=10.42.0.29, 192.168.112.104 | 2025-09-20 11:26:10.219424 | orchestrator | | config_drive | | 2025-09-20 11:26:10.219434 | orchestrator | | created | 2025-09-20T11:20:54Z | 2025-09-20 11:26:10.219443 | orchestrator | | description | None | 2025-09-20 11:26:10.219454 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 11:26:10.219463 | orchestrator | | hostId | 
dbdf1e2a8e6a76b68dab6a9e307c3d654e331c8e573ef92e00c19b7b | 2025-09-20 11:26:10.219473 | orchestrator | | host_status | None | 2025-09-20 11:26:10.219496 | orchestrator | | id | ba383266-425b-4d74-b766-c13e936ea5bf | 2025-09-20 11:26:10.219507 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 11:26:10.219517 | orchestrator | | key_name | test | 2025-09-20 11:26:10.219527 | orchestrator | | locked | False | 2025-09-20 11:26:10.219537 | orchestrator | | locked_reason | None | 2025-09-20 11:26:10.219547 | orchestrator | | name | test | 2025-09-20 11:26:10.219557 | orchestrator | | pinned_availability_zone | None | 2025-09-20 11:26:10.219567 | orchestrator | | progress | 0 | 2025-09-20 11:26:10.219577 | orchestrator | | project_id | 16cb2673363d4cab80a359d3e9fe9dc9 | 2025-09-20 11:26:10.219593 | orchestrator | | properties | hostname='test' | 2025-09-20 11:26:10.219609 | orchestrator | | security_groups | name='icmp' | 2025-09-20 11:26:10.219626 | orchestrator | | | name='ssh' | 2025-09-20 11:26:10.219636 | orchestrator | | server_groups | None | 2025-09-20 11:26:10.219678 | orchestrator | | status | ACTIVE | 2025-09-20 11:26:10.219690 | orchestrator | | tags | test | 2025-09-20 11:26:10.219700 | orchestrator | | trusted_image_certificates | None | 2025-09-20 11:26:10.219710 | orchestrator | | updated | 2025-09-20T11:24:48Z | 2025-09-20 11:26:10.219720 | orchestrator | | user_id | ac93b9d958aa412688d85a88495a090c | 2025-09-20 11:26:10.219737 | orchestrator | | volumes_attached | delete_on_termination='True', id='e3dd51d7-bec8-43fe-b4ae-475604a3e729' | 2025-09-20 11:26:10.219747 | orchestrator | | | delete_on_termination='False', id='2c3d169c-addc-455e-9607-904908415aa3' | 2025-09-20 11:26:10.221997 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:10.515257 | orchestrator | + openstack --os-cloud test server show test-1 2025-09-20 11:26:13.697159 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:13.697263 | orchestrator | | Field | Value | 2025-09-20 11:26:13.697278 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:13.697290 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 11:26:13.697300 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 11:26:13.697310 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 11:26:13.697338 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 
2025-09-20 11:26:13.697350 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 11:26:13.697361 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 11:26:13.697387 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 11:26:13.697398 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 11:26:13.697413 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 11:26:13.697423 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 11:26:13.697432 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 11:26:13.697443 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 11:26:13.697460 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 11:26:13.697470 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 11:26:13.697480 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 11:26:13.697490 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T11:22:21.000000 | 2025-09-20 11:26:13.697506 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 11:26:13.697516 | orchestrator | | accessIPv4 | | 2025-09-20 11:26:13.697530 | orchestrator | | accessIPv6 | | 2025-09-20 11:26:13.697540 | orchestrator | | addresses | auto_allocated_network=10.42.0.41, 192.168.112.197 | 2025-09-20 11:26:13.697549 | orchestrator | | config_drive | | 2025-09-20 11:26:13.697559 | orchestrator | | created | 2025-09-20T11:21:48Z | 2025-09-20 11:26:13.697574 | orchestrator | | description | None | 2025-09-20 11:26:13.697584 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 11:26:13.697594 | orchestrator | | hostId | 16d90b94ea96a0fc318834fc52673cf9148917b8997df38f1cccdf97 | 2025-09-20 11:26:13.697604 | orchestrator | | host_status | None | 2025-09-20 11:26:13.697620 | orchestrator | | id | d44177f7-00e6-40b1-9e10-3388faf842b9 | 2025-09-20 11:26:13.697631 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 11:26:13.697667 | orchestrator | | key_name | test | 2025-09-20 11:26:13.697678 | orchestrator | | locked | False | 2025-09-20 11:26:13.697689 | orchestrator | | locked_reason | None | 2025-09-20 11:26:13.697704 | orchestrator | | name | test-1 | 2025-09-20 11:26:13.697715 | orchestrator | | pinned_availability_zone | None | 2025-09-20 11:26:13.697725 | orchestrator | | progress | 0 | 2025-09-20 11:26:13.697734 | orchestrator | | project_id | 16cb2673363d4cab80a359d3e9fe9dc9 | 2025-09-20 11:26:13.697743 | orchestrator | | properties | hostname='test-1' | 2025-09-20 11:26:13.697760 | orchestrator | | security_groups | name='icmp' | 2025-09-20 11:26:13.697771 | orchestrator | | | name='ssh' | 2025-09-20 11:26:13.697781 | orchestrator | | server_groups | None | 2025-09-20 11:26:13.697791 | orchestrator | | status | ACTIVE | 2025-09-20 11:26:13.697814 | orchestrator | | tags | test | 2025-09-20 11:26:13.697831 | orchestrator | | trusted_image_certificates | None | 2025-09-20 11:26:13.697841 | orchestrator | | updated | 2025-09-20T11:24:53Z | 2025-09-20 11:26:13.697851 | orchestrator | | user_id | ac93b9d958aa412688d85a88495a090c | 2025-09-20 11:26:13.697861 | orchestrator | | volumes_attached | 
delete_on_termination='True', id='ed36c071-20f9-4352-953a-0227611c9fe4' | 2025-09-20 11:26:13.701329 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:13.981512 | orchestrator | + openstack --os-cloud test server show test-2 2025-09-20 11:26:16.908378 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:16.908508 | orchestrator | | Field | Value | 2025-09-20 11:26:16.908535 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:16.908568 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 11:26:16.908578 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 11:26:16.908588 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 11:26:16.908598 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-09-20 11:26:16.908608 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 11:26:16.908618 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 11:26:16.908685 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 11:26:16.908698 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 11:26:16.908708 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 11:26:16.908722 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 11:26:16.908740 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 11:26:16.908750 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 11:26:16.908789 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 11:26:16.908800 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 11:26:16.908810 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 11:26:16.908820 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T11:23:10.000000 | 2025-09-20 11:26:16.908837 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 11:26:16.908848 | orchestrator | | accessIPv4 | | 2025-09-20 11:26:16.908858 | orchestrator | | accessIPv6 | | 2025-09-20 11:26:16.908878 | orchestrator | | addresses | auto_allocated_network=10.42.0.27, 192.168.112.116 | 2025-09-20 11:26:16.908891 | orchestrator | | config_drive | | 2025-09-20 11:26:16.908903 | orchestrator | | created | 2025-09-20T11:22:37Z | 2025-09-20 11:26:16.908914 | orchestrator | | description | None | 2025-09-20 11:26:16.908925 | orchestrator | | flavor | description=, disk='0', ephemeral='0', 
extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 11:26:16.908936 | orchestrator | | hostId | 497249bfb258485434643108b1c47d508147678f82778decdcd4f2c2 | 2025-09-20 11:26:16.908948 | orchestrator | | host_status | None | 2025-09-20 11:26:16.908966 | orchestrator | | id | 8be32264-c85a-485a-bc17-6b05e3d5e2ea | 2025-09-20 11:26:16.908978 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 11:26:16.909000 | orchestrator | | key_name | test | 2025-09-20 11:26:16.909012 | orchestrator | | locked | False | 2025-09-20 11:26:16.909024 | orchestrator | | locked_reason | None | 2025-09-20 11:26:16.909035 | orchestrator | | name | test-2 | 2025-09-20 11:26:16.909047 | orchestrator | | pinned_availability_zone | None | 2025-09-20 11:26:16.909058 | orchestrator | | progress | 0 | 2025-09-20 11:26:16.909070 | orchestrator | | project_id | 16cb2673363d4cab80a359d3e9fe9dc9 | 2025-09-20 11:26:16.909081 | orchestrator | | properties | hostname='test-2' | 2025-09-20 11:26:16.909097 | orchestrator | | security_groups | name='icmp' | 2025-09-20 11:26:16.909113 | orchestrator | | | name='ssh' | 2025-09-20 11:26:16.909127 | orchestrator | | server_groups | None | 2025-09-20 11:26:16.909137 | orchestrator | | status | ACTIVE | 2025-09-20 11:26:16.909147 | orchestrator | | tags | test | 2025-09-20 11:26:16.909157 | orchestrator | | trusted_image_certificates | None | 2025-09-20 11:26:16.909167 | orchestrator | | updated | 2025-09-20T11:24:57Z | 2025-09-20 11:26:16.909177 | orchestrator | | user_id | ac93b9d958aa412688d85a88495a090c | 2025-09-20 11:26:16.909187 | orchestrator | | volumes_attached | delete_on_termination='True', id='ccd6533f-ab63-43c7-b7f0-8cb4e86d50ff' | 2025-09-20 11:26:16.913186 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:17.201992 | orchestrator | + openstack --os-cloud test server show test-3 2025-09-20 11:26:20.200876 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:20.200979 | orchestrator | | Field | Value | 2025-09-20 11:26:20.200993 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:20.201004 | orchestrator | | 
OS-DCF:diskConfig | MANUAL | 2025-09-20 11:26:20.201014 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 11:26:20.201025 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 11:26:20.201052 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-09-20 11:26:20.201063 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 11:26:20.201073 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 11:26:20.201099 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 11:26:20.201129 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 11:26:20.201140 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 11:26:20.201155 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 11:26:20.201166 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 11:26:20.201176 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 11:26:20.201186 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 11:26:20.201196 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 11:26:20.201206 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 11:26:20.201216 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T11:23:50.000000 | 2025-09-20 11:26:20.201246 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 11:26:20.201257 | orchestrator | | accessIPv4 | | 2025-09-20 11:26:20.201267 | orchestrator | | accessIPv6 | | 2025-09-20 11:26:20.201281 | orchestrator | | addresses | auto_allocated_network=10.42.0.30, 192.168.112.135 | 2025-09-20 11:26:20.201292 | orchestrator | | config_drive | | 2025-09-20 11:26:20.201302 | orchestrator | | created | 2025-09-20T11:23:25Z | 2025-09-20 11:26:20.201312 | orchestrator | | description | None | 2025-09-20 11:26:20.201322 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 11:26:20.201332 | orchestrator | | hostId | 497249bfb258485434643108b1c47d508147678f82778decdcd4f2c2 | 2025-09-20 11:26:20.201348 | orchestrator | | host_status | None | 2025-09-20 11:26:20.201364 | orchestrator | | id | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | 2025-09-20 11:26:20.201375 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 11:26:20.201385 | orchestrator | | key_name | test | 2025-09-20 11:26:20.201399 | orchestrator | | locked | False | 2025-09-20 11:26:20.201409 | orchestrator | | locked_reason | None | 2025-09-20 11:26:20.201421 | orchestrator | | name | test-3 | 2025-09-20 11:26:20.201433 | orchestrator | | pinned_availability_zone | None | 2025-09-20 11:26:20.201443 | orchestrator | | progress | 0 | 2025-09-20 11:26:20.201460 | orchestrator | | project_id | 16cb2673363d4cab80a359d3e9fe9dc9 | 2025-09-20 11:26:20.201471 | orchestrator | | properties | hostname='test-3' | 2025-09-20 11:26:20.201489 | orchestrator | | security_groups | name='icmp' | 2025-09-20 11:26:20.201502 | orchestrator | | | name='ssh' | 2025-09-20 11:26:20.201513 | orchestrator | | server_groups | None | 2025-09-20 11:26:20.201529 | orchestrator | | status | ACTIVE | 2025-09-20 11:26:20.201541 | orchestrator | | tags | test | 2025-09-20 11:26:20.201552 | orchestrator | | 
trusted_image_certificates | None | 2025-09-20 11:26:20.201563 | orchestrator | | updated | 2025-09-20T11:25:02Z | 2025-09-20 11:26:20.201575 | orchestrator | | user_id | ac93b9d958aa412688d85a88495a090c | 2025-09-20 11:26:20.201592 | orchestrator | | volumes_attached | delete_on_termination='True', id='ad5568ff-2964-4f22-a575-b2104ae11dd3' | 2025-09-20 11:26:20.205408 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:20.492879 | orchestrator | + openstack --os-cloud test server show test-4 2025-09-20 11:26:23.518274 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:23.518383 | orchestrator | | Field | Value | 2025-09-20 11:26:23.518415 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:23.518428 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 11:26:23.518440 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 11:26:23.518452 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 11:26:23.518464 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-09-20 11:26:23.518496 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 11:26:23.518508 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 11:26:23.518541 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 11:26:23.518554 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 11:26:23.518566 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 11:26:23.518582 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 11:26:23.518594 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 11:26:23.518606 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 11:26:23.518618 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 11:26:23.518671 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 11:26:23.518683 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 11:26:23.518694 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T11:24:33.000000 | 2025-09-20 11:26:23.518712 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 11:26:23.518724 | orchestrator | | accessIPv4 | | 2025-09-20 11:26:23.518735 | orchestrator | | accessIPv6 | | 2025-09-20 11:26:23.518747 | orchestrator | | addresses | auto_allocated_network=10.42.0.49, 192.168.112.137 | 2025-09-20 11:26:23.518758 | 
orchestrator | | config_drive | | 2025-09-20 11:26:23.518769 | orchestrator | | created | 2025-09-20T11:24:08Z | 2025-09-20 11:26:23.519195 | orchestrator | | description | None | 2025-09-20 11:26:23.519211 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 11:26:23.519223 | orchestrator | | hostId | 16d90b94ea96a0fc318834fc52673cf9148917b8997df38f1cccdf97 | 2025-09-20 11:26:23.519234 | orchestrator | | host_status | None | 2025-09-20 11:26:23.519260 | orchestrator | | id | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | 2025-09-20 11:26:23.519272 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 11:26:23.519284 | orchestrator | | key_name | test | 2025-09-20 11:26:23.519295 | orchestrator | | locked | False | 2025-09-20 11:26:23.519306 | orchestrator | | locked_reason | None | 2025-09-20 11:26:23.519324 | orchestrator | | name | test-4 | 2025-09-20 11:26:23.519336 | orchestrator | | pinned_availability_zone | None | 2025-09-20 11:26:23.519347 | orchestrator | | progress | 0 | 2025-09-20 11:26:23.519358 | orchestrator | | project_id | 16cb2673363d4cab80a359d3e9fe9dc9 | 2025-09-20 11:26:23.519369 | orchestrator | | properties | hostname='test-4' | 2025-09-20 11:26:23.519392 | orchestrator | | security_groups | name='icmp' | 2025-09-20 11:26:23.519405 | orchestrator | | | name='ssh' | 2025-09-20 11:26:23.519416 | orchestrator | | server_groups | None | 2025-09-20 11:26:23.519427 | orchestrator | | status | ACTIVE | 2025-09-20 11:26:23.519438 | orchestrator | | tags | test | 2025-09-20 11:26:23.519456 | orchestrator | | trusted_image_certificates | None | 2025-09-20 11:26:23.519468 | orchestrator | | updated | 2025-09-20T11:25:06Z | 2025-09-20 11:26:23.519479 | orchestrator | | user_id | ac93b9d958aa412688d85a88495a090c | 2025-09-20 11:26:23.519491 | orchestrator | | volumes_attached | delete_on_termination='True', id='a8f608fa-fde8-45d8-be67-a5b3c11478c3' | 2025-09-20 11:26:23.522725 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 11:26:23.842882 | orchestrator | + server_ping 2025-09-20 11:26:23.844335 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 11:26:23.844555 | orchestrator | ++ tr -d '\r' 2025-09-20 11:26:26.856446 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:26:26.856577 | orchestrator | + ping -c3 192.168.112.104 2025-09-20 11:26:26.870421 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 
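For reference, the xtrace lines above (+ server_ping, + for address in ..., + ping -c3 ...) expand a small shell helper that pings every ACTIVE floating IP three times. A minimal sketch of that loop, reconstructed from the trace (the function wrapper is an assumption; the openstack/ping pipeline itself is taken verbatim from the log):

server_ping() {
    # Collect all ACTIVE floating IPs of the "test" cloud; tr strips stray carriage returns.
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
                         -f value -c "Floating IP Address" | tr -d '\r'); do
        # Three ICMP echo requests per address; a non-zero exit code marks the address as unreachable.
        ping -c3 "${address}"
    done
}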
2025-09-20 11:26:26.870532 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=6.93 ms 2025-09-20 11:26:27.866300 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.37 ms 2025-09-20 11:26:28.867336 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.82 ms 2025-09-20 11:26:28.867461 | orchestrator | 2025-09-20 11:26:28.867477 | orchestrator | --- 192.168.112.104 ping statistics --- 2025-09-20 11:26:28.867502 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-20 11:26:28.867515 | orchestrator | rtt min/avg/max/mdev = 1.822/3.706/6.928/2.288 ms 2025-09-20 11:26:28.868176 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:26:28.868212 | orchestrator | + ping -c3 192.168.112.116 2025-09-20 11:26:28.877297 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2025-09-20 11:26:28.877362 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=5.13 ms 2025-09-20 11:26:29.875783 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.99 ms 2025-09-20 11:26:30.878003 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.64 ms 2025-09-20 11:26:30.878160 | orchestrator | 2025-09-20 11:26:30.878176 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-09-20 11:26:30.878189 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:26:30.878200 | orchestrator | rtt min/avg/max/mdev = 1.986/3.252/5.132/1.355 ms 2025-09-20 11:26:30.878955 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:26:30.878979 | orchestrator | + ping -c3 192.168.112.135 2025-09-20 11:26:30.892470 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 2025-09-20 11:26:30.892512 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=8.55 ms 2025-09-20 11:26:31.889492 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=3.14 ms 2025-09-20 11:26:32.889497 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.71 ms 2025-09-20 11:26:32.889600 | orchestrator | 2025-09-20 11:26:32.889753 | orchestrator | --- 192.168.112.135 ping statistics --- 2025-09-20 11:26:32.889773 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-20 11:26:32.889785 | orchestrator | rtt min/avg/max/mdev = 1.705/4.464/8.545/2.944 ms 2025-09-20 11:26:32.889809 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:26:32.889821 | orchestrator | + ping -c3 192.168.112.197 2025-09-20 11:26:32.902299 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 
2025-09-20 11:26:32.902419 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.54 ms 2025-09-20 11:26:33.899588 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.76 ms 2025-09-20 11:26:34.900230 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.60 ms 2025-09-20 11:26:34.900342 | orchestrator | 2025-09-20 11:26:34.900358 | orchestrator | --- 192.168.112.197 ping statistics --- 2025-09-20 11:26:34.900370 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:26:34.900381 | orchestrator | rtt min/avg/max/mdev = 1.600/3.968/7.544/2.572 ms 2025-09-20 11:26:34.900393 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:26:34.900404 | orchestrator | + ping -c3 192.168.112.137 2025-09-20 11:26:34.910209 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 2025-09-20 11:26:34.910279 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.45 ms 2025-09-20 11:26:35.908051 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.32 ms 2025-09-20 11:26:36.909393 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.75 ms 2025-09-20 11:26:36.909611 | orchestrator | 2025-09-20 11:26:36.909688 | orchestrator | --- 192.168.112.137 ping statistics --- 2025-09-20 11:26:36.909702 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:26:36.909714 | orchestrator | rtt min/avg/max/mdev = 1.747/3.507/6.451/2.094 ms 2025-09-20 11:26:36.909737 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 11:26:36.909749 | orchestrator | + compute_list 2025-09-20 11:26:36.909761 | orchestrator | + osism manage compute list testbed-node-3 2025-09-20 11:26:40.248706 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:26:40.248841 | orchestrator | | ID | Name | Status | 2025-09-20 11:26:40.248856 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 11:26:40.248869 | orchestrator | | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | test-4 | ACTIVE | 2025-09-20 11:26:40.248880 | orchestrator | | d44177f7-00e6-40b1-9e10-3388faf842b9 | test-1 | ACTIVE | 2025-09-20 11:26:40.248891 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:26:40.558460 | orchestrator | + osism manage compute list testbed-node-4 2025-09-20 11:26:44.116685 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:26:44.116774 | orchestrator | | ID | Name | Status | 2025-09-20 11:26:44.116784 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 11:26:44.116793 | orchestrator | | ba383266-425b-4d74-b766-c13e936ea5bf | test | ACTIVE | 2025-09-20 11:26:44.116801 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:26:44.450308 | orchestrator | + osism manage compute list testbed-node-5 2025-09-20 11:26:47.944240 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:26:47.944355 | orchestrator | | ID | Name | Status | 2025-09-20 11:26:47.944370 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 11:26:47.944381 | orchestrator | | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | test-3 | ACTIVE | 2025-09-20 11:26:47.944392 | orchestrator | | 
8be32264-c85a-485a-bc17-6b05e3d5e2ea | test-2 | ACTIVE | 2025-09-20 11:26:47.944403 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:26:48.316806 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-09-20 11:26:51.504592 | orchestrator | 2025-09-20 11:26:51 | INFO  | Live migrating server ba383266-425b-4d74-b766-c13e936ea5bf 2025-09-20 11:27:04.175134 | orchestrator | 2025-09-20 11:27:04 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:06.554326 | orchestrator | 2025-09-20 11:27:06 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:08.990819 | orchestrator | 2025-09-20 11:27:08 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:11.492457 | orchestrator | 2025-09-20 11:27:11 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:14.209214 | orchestrator | 2025-09-20 11:27:14 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:16.574308 | orchestrator | 2025-09-20 11:27:16 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:19.077573 | orchestrator | 2025-09-20 11:27:19 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:21.356884 | orchestrator | 2025-09-20 11:27:21 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:23.690299 | orchestrator | 2025-09-20 11:27:23 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:26.026314 | orchestrator | 2025-09-20 11:27:26 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress 2025-09-20 11:27:28.295237 | orchestrator | 2025-09-20 11:27:28 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) completed with status ACTIVE 2025-09-20 11:27:28.731079 | orchestrator | + compute_list 2025-09-20 11:27:28.731167 | orchestrator | + osism manage compute list testbed-node-3 2025-09-20 11:27:32.048261 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:27:32.048348 | orchestrator | | ID | Name | Status | 2025-09-20 11:27:32.048358 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 11:27:32.048365 | orchestrator | | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | test-4 | ACTIVE | 2025-09-20 11:27:32.048371 | orchestrator | | d44177f7-00e6-40b1-9e10-3388faf842b9 | test-1 | ACTIVE | 2025-09-20 11:27:32.048378 | orchestrator | | ba383266-425b-4d74-b766-c13e936ea5bf | test | ACTIVE | 2025-09-20 11:27:32.048384 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:27:32.463044 | orchestrator | + osism manage compute list testbed-node-4 2025-09-20 11:27:35.323681 | orchestrator | +------+--------+----------+ 2025-09-20 11:27:35.323800 | orchestrator | | ID | Name | Status | 2025-09-20 11:27:35.323810 | orchestrator | |------+--------+----------| 2025-09-20 11:27:35.323818 | orchestrator | +------+--------+----------+ 2025-09-20 11:27:35.695556 | orchestrator | + osism manage compute list testbed-node-5 2025-09-20 11:27:38.898984 | orchestrator | +--------------------------------------+--------+----------+ 
2025-09-20 11:27:38.899102 | orchestrator | | ID | Name | Status | 2025-09-20 11:27:38.899143 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 11:27:38.899156 | orchestrator | | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | test-3 | ACTIVE | 2025-09-20 11:27:38.899168 | orchestrator | | 8be32264-c85a-485a-bc17-6b05e3d5e2ea | test-2 | ACTIVE | 2025-09-20 11:27:38.899179 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:27:39.173108 | orchestrator | + server_ping 2025-09-20 11:27:39.173814 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 11:27:39.174333 | orchestrator | ++ tr -d '\r' 2025-09-20 11:27:41.821533 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:27:41.821669 | orchestrator | + ping -c3 192.168.112.104 2025-09-20 11:27:41.834536 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 2025-09-20 11:27:41.834600 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=9.33 ms 2025-09-20 11:27:42.829498 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.08 ms 2025-09-20 11:27:43.831802 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.93 ms 2025-09-20 11:27:43.831905 | orchestrator | 2025-09-20 11:27:43.831920 | orchestrator | --- 192.168.112.104 ping statistics --- 2025-09-20 11:27:43.831933 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:27:43.831944 | orchestrator | rtt min/avg/max/mdev = 1.934/4.445/9.325/3.451 ms 2025-09-20 11:27:43.831956 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:27:43.831969 | orchestrator | + ping -c3 192.168.112.116 2025-09-20 11:27:43.841826 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2025-09-20 11:27:43.841899 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=5.92 ms 2025-09-20 11:27:44.839571 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.41 ms 2025-09-20 11:27:45.840240 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.78 ms 2025-09-20 11:27:45.840321 | orchestrator | 2025-09-20 11:27:45.840670 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-09-20 11:27:45.840685 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-20 11:27:45.840692 | orchestrator | rtt min/avg/max/mdev = 1.776/3.368/5.918/1.821 ms 2025-09-20 11:27:45.841437 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:27:45.841450 | orchestrator | + ping -c3 192.168.112.135 2025-09-20 11:27:45.853417 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 
2025-09-20 11:27:45.853505 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=9.65 ms 2025-09-20 11:27:46.847917 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=1.99 ms 2025-09-20 11:27:47.849214 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.61 ms 2025-09-20 11:27:47.849314 | orchestrator | 2025-09-20 11:27:47.849329 | orchestrator | --- 192.168.112.135 ping statistics --- 2025-09-20 11:27:47.849342 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:27:47.849353 | orchestrator | rtt min/avg/max/mdev = 1.607/4.414/9.647/3.703 ms 2025-09-20 11:27:47.849364 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:27:47.849376 | orchestrator | + ping -c3 192.168.112.197 2025-09-20 11:27:47.861822 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2025-09-20 11:27:47.861912 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.85 ms 2025-09-20 11:27:48.857523 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.38 ms 2025-09-20 11:27:49.858500 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.55 ms 2025-09-20 11:27:49.858693 | orchestrator | 2025-09-20 11:27:49.858713 | orchestrator | --- 192.168.112.197 ping statistics --- 2025-09-20 11:27:49.858726 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:27:49.858737 | orchestrator | rtt min/avg/max/mdev = 1.545/3.925/7.852/2.797 ms 2025-09-20 11:27:49.859165 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:27:49.859224 | orchestrator | + ping -c3 192.168.112.137 2025-09-20 11:27:49.869398 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 
2025-09-20 11:27:49.869428 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.48 ms 2025-09-20 11:27:50.867483 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.44 ms 2025-09-20 11:27:51.869527 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.87 ms 2025-09-20 11:27:51.869699 | orchestrator | 2025-09-20 11:27:51.869717 | orchestrator | --- 192.168.112.137 ping statistics --- 2025-09-20 11:27:51.869730 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-20 11:27:51.869741 | orchestrator | rtt min/avg/max/mdev = 1.869/3.596/6.483/2.054 ms 2025-09-20 11:27:51.869752 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-09-20 11:27:54.839498 | orchestrator | 2025-09-20 11:27:54 | INFO  | Live migrating server cb2c2c27-309e-4c39-b57b-cb15c72dbf8f 2025-09-20 11:28:07.020110 | orchestrator | 2025-09-20 11:28:07 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:09.366957 | orchestrator | 2025-09-20 11:28:09 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:11.676222 | orchestrator | 2025-09-20 11:28:11 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:14.037781 | orchestrator | 2025-09-20 11:28:14 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:16.393550 | orchestrator | 2025-09-20 11:28:16 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:18.658827 | orchestrator | 2025-09-20 11:28:18 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:21.012220 | orchestrator | 2025-09-20 11:28:21 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:23.285573 | orchestrator | 2025-09-20 11:28:23 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:28:25.575809 | orchestrator | 2025-09-20 11:28:25 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) completed with status ACTIVE 2025-09-20 11:28:25.575886 | orchestrator | 2025-09-20 11:28:25 | INFO  | Live migrating server 8be32264-c85a-485a-bc17-6b05e3d5e2ea 2025-09-20 11:28:37.980557 | orchestrator | 2025-09-20 11:28:37 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:40.302679 | orchestrator | 2025-09-20 11:28:40 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:42.619793 | orchestrator | 2025-09-20 11:28:42 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:44.872319 | orchestrator | 2025-09-20 11:28:44 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:47.395797 | orchestrator | 2025-09-20 11:28:47 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:49.631748 | orchestrator | 2025-09-20 11:28:49 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:51.895495 | orchestrator | 2025-09-20 11:28:51 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea 
(test-2) is still in progress 2025-09-20 11:28:54.242472 | orchestrator | 2025-09-20 11:28:54 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:28:56.524101 | orchestrator | 2025-09-20 11:28:56 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) completed with status ACTIVE 2025-09-20 11:28:56.852328 | orchestrator | + compute_list 2025-09-20 11:28:56.852420 | orchestrator | + osism manage compute list testbed-node-3 2025-09-20 11:29:00.072954 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:29:00.073070 | orchestrator | | ID | Name | Status | 2025-09-20 11:29:00.073086 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 11:29:00.073098 | orchestrator | | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | test-4 | ACTIVE | 2025-09-20 11:29:00.073109 | orchestrator | | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | test-3 | ACTIVE | 2025-09-20 11:29:00.073119 | orchestrator | | 8be32264-c85a-485a-bc17-6b05e3d5e2ea | test-2 | ACTIVE | 2025-09-20 11:29:00.073131 | orchestrator | | d44177f7-00e6-40b1-9e10-3388faf842b9 | test-1 | ACTIVE | 2025-09-20 11:29:00.073142 | orchestrator | | ba383266-425b-4d74-b766-c13e936ea5bf | test | ACTIVE | 2025-09-20 11:29:00.073152 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 11:29:00.539450 | orchestrator | + osism manage compute list testbed-node-4 2025-09-20 11:29:03.292148 | orchestrator | +------+--------+----------+ 2025-09-20 11:29:03.292254 | orchestrator | | ID | Name | Status | 2025-09-20 11:29:03.292268 | orchestrator | |------+--------+----------| 2025-09-20 11:29:03.292280 | orchestrator | +------+--------+----------+ 2025-09-20 11:29:03.637893 | orchestrator | + osism manage compute list testbed-node-5 2025-09-20 11:29:06.469781 | orchestrator | +------+--------+----------+ 2025-09-20 11:29:06.469890 | orchestrator | | ID | Name | Status | 2025-09-20 11:29:06.469904 | orchestrator | |------+--------+----------| 2025-09-20 11:29:06.469916 | orchestrator | +------+--------+----------+ 2025-09-20 11:29:06.702787 | orchestrator | + server_ping 2025-09-20 11:29:06.703170 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 11:29:06.703811 | orchestrator | ++ tr -d '\r' 2025-09-20 11:29:09.426674 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:29:09.426774 | orchestrator | + ping -c3 192.168.112.104 2025-09-20 11:29:09.438505 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 
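The osism manage compute migrate --yes --target ... calls above live-migrate each instance off the source node and then poll Nova until the migration completes with status ACTIVE. As a hedged illustration only (this is not how osism implements it), a comparable check with the plain openstack CLI could look like the sketch below; the cloud name "admin", the server name "test" and the target host are placeholders:

# Trigger a live migration to a specific host, then wait until Nova reports the move.
openstack --os-cloud admin server migrate --live-migration --host testbed-node-3 test
while [ "$(openstack --os-cloud admin server show test -f value -c OS-EXT-SRV-ATTR:host)" != "testbed-node-3" ]; do
    sleep 2   # the host field flips to the target once the migration has finished
done
# Confirm the instance came back ACTIVE on the target hypervisor.
openstack --os-cloud admin server show test -f value -c status -c OS-EXT-SRV-ATTR:host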
2025-09-20 11:29:09.438584 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=9.33 ms 2025-09-20 11:29:10.433307 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.48 ms 2025-09-20 11:29:11.434300 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.34 ms 2025-09-20 11:29:11.434407 | orchestrator | 2025-09-20 11:29:11.434421 | orchestrator | --- 192.168.112.104 ping statistics --- 2025-09-20 11:29:11.434433 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:29:11.434592 | orchestrator | rtt min/avg/max/mdev = 1.343/4.382/9.328/3.527 ms 2025-09-20 11:29:11.434663 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:29:11.434677 | orchestrator | + ping -c3 192.168.112.116 2025-09-20 11:29:11.441553 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2025-09-20 11:29:11.441579 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=4.66 ms 2025-09-20 11:29:12.441720 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.49 ms 2025-09-20 11:29:13.443584 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.82 ms 2025-09-20 11:29:13.443859 | orchestrator | 2025-09-20 11:29:13.443881 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-09-20 11:29:13.443892 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-20 11:29:13.443902 | orchestrator | rtt min/avg/max/mdev = 1.819/2.988/4.655/1.210 ms 2025-09-20 11:29:13.443922 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:29:13.443932 | orchestrator | + ping -c3 192.168.112.135 2025-09-20 11:29:13.456106 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 2025-09-20 11:29:13.456182 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=7.32 ms 2025-09-20 11:29:14.452868 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.30 ms 2025-09-20 11:29:15.453113 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.35 ms 2025-09-20 11:29:15.453407 | orchestrator | 2025-09-20 11:29:15.453428 | orchestrator | --- 192.168.112.135 ping statistics --- 2025-09-20 11:29:15.453439 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:29:15.453449 | orchestrator | rtt min/avg/max/mdev = 1.345/3.656/7.322/2.621 ms 2025-09-20 11:29:15.453471 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:29:15.453481 | orchestrator | + ping -c3 192.168.112.197 2025-09-20 11:29:15.461762 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 
2025-09-20 11:29:15.461805 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=4.16 ms 2025-09-20 11:29:16.461759 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.27 ms 2025-09-20 11:29:17.463165 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=2.00 ms 2025-09-20 11:29:17.463290 | orchestrator | 2025-09-20 11:29:17.463307 | orchestrator | --- 192.168.112.197 ping statistics --- 2025-09-20 11:29:17.463320 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:29:17.463331 | orchestrator | rtt min/avg/max/mdev = 2.004/2.809/4.159/0.960 ms 2025-09-20 11:29:17.464067 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:29:17.464099 | orchestrator | + ping -c3 192.168.112.137 2025-09-20 11:29:17.476329 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 2025-09-20 11:29:17.476421 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.60 ms 2025-09-20 11:29:18.473973 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.14 ms 2025-09-20 11:29:19.474923 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.76 ms 2025-09-20 11:29:19.475016 | orchestrator | 2025-09-20 11:29:19.475026 | orchestrator | --- 192.168.112.137 ping statistics --- 2025-09-20 11:29:19.475034 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:29:19.475041 | orchestrator | rtt min/avg/max/mdev = 1.757/3.499/6.603/2.200 ms 2025-09-20 11:29:19.475460 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-09-20 11:29:22.530552 | orchestrator | 2025-09-20 11:29:22 | INFO  | Live migrating server 12bbf371-7c02-4926-9ba2-33e9d11a2031 2025-09-20 11:29:35.346863 | orchestrator | 2025-09-20 11:29:35 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:37.703190 | orchestrator | 2025-09-20 11:29:37 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:40.058856 | orchestrator | 2025-09-20 11:29:40 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:42.350284 | orchestrator | 2025-09-20 11:29:42 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:44.688458 | orchestrator | 2025-09-20 11:29:44 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:46.951721 | orchestrator | 2025-09-20 11:29:46 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:49.285253 | orchestrator | 2025-09-20 11:29:49 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:51.603135 | orchestrator | 2025-09-20 11:29:51 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:29:53.878122 | orchestrator | 2025-09-20 11:29:53 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) completed with status ACTIVE 2025-09-20 11:29:53.878242 | orchestrator | 2025-09-20 11:29:53 | INFO  | Live migrating server cb2c2c27-309e-4c39-b57b-cb15c72dbf8f 2025-09-20 11:30:04.819575 | orchestrator | 2025-09-20 11:30:04 | INFO  | Live migration 
of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:07.081270 | orchestrator | 2025-09-20 11:30:07 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:09.416397 | orchestrator | 2025-09-20 11:30:09 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:11.727519 | orchestrator | 2025-09-20 11:30:11 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:13.956422 | orchestrator | 2025-09-20 11:30:13 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:16.198878 | orchestrator | 2025-09-20 11:30:16 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:18.457803 | orchestrator | 2025-09-20 11:30:18 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:20.717221 | orchestrator | 2025-09-20 11:30:20 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:30:23.001575 | orchestrator | 2025-09-20 11:30:22 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) completed with status ACTIVE 2025-09-20 11:30:23.001774 | orchestrator | 2025-09-20 11:30:22 | INFO  | Live migrating server 8be32264-c85a-485a-bc17-6b05e3d5e2ea 2025-09-20 11:30:33.209282 | orchestrator | 2025-09-20 11:30:33 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:35.524485 | orchestrator | 2025-09-20 11:30:35 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:37.909152 | orchestrator | 2025-09-20 11:30:37 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:40.221938 | orchestrator | 2025-09-20 11:30:40 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:42.585788 | orchestrator | 2025-09-20 11:30:42 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:44.876231 | orchestrator | 2025-09-20 11:30:44 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:47.200814 | orchestrator | 2025-09-20 11:30:47 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:49.463916 | orchestrator | 2025-09-20 11:30:49 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:30:51.745567 | orchestrator | 2025-09-20 11:30:51 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) completed with status ACTIVE 2025-09-20 11:30:51.745731 | orchestrator | 2025-09-20 11:30:51 | INFO  | Live migrating server d44177f7-00e6-40b1-9e10-3388faf842b9 2025-09-20 11:31:01.979369 | orchestrator | 2025-09-20 11:31:01 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:31:04.296718 | orchestrator | 2025-09-20 11:31:04 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:31:06.718312 | orchestrator | 2025-09-20 11:31:06 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 
11:31:09.207057 | orchestrator | 2025-09-20 11:31:09 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress
2025-09-20 11:31:11.435955 | orchestrator | 2025-09-20 11:31:11 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress
2025-09-20 11:31:13.705697 | orchestrator | 2025-09-20 11:31:13 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress
2025-09-20 11:31:15.942597 | orchestrator | 2025-09-20 11:31:15 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress
2025-09-20 11:31:18.249605 | orchestrator | 2025-09-20 11:31:18 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress
2025-09-20 11:31:20.579767 | orchestrator | 2025-09-20 11:31:20 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) completed with status ACTIVE
2025-09-20 11:31:20.579872 | orchestrator | 2025-09-20 11:31:20 | INFO  | Live migrating server ba383266-425b-4d74-b766-c13e936ea5bf
2025-09-20 11:31:30.260086 | orchestrator | 2025-09-20 11:31:30 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:32.605264 | orchestrator | 2025-09-20 11:31:32 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:34.951057 | orchestrator | 2025-09-20 11:31:34 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:37.284814 | orchestrator | 2025-09-20 11:31:37 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:39.634341 | orchestrator | 2025-09-20 11:31:39 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:41.986398 | orchestrator | 2025-09-20 11:31:41 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:44.412078 | orchestrator | 2025-09-20 11:31:44 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:46.734144 | orchestrator | 2025-09-20 11:31:46 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:49.164468 | orchestrator | 2025-09-20 11:31:49 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:51.427889 | orchestrator | 2025-09-20 11:31:51 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:31:53.788296 | orchestrator | 2025-09-20 11:31:53 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) completed with status ACTIVE
2025-09-20 11:31:54.207319 | orchestrator | + compute_list
2025-09-20 11:31:54.207442 | orchestrator | + osism manage compute list testbed-node-3
2025-09-20 11:31:57.013195 | orchestrator | +------+--------+----------+
2025-09-20 11:31:57.013300 | orchestrator | | ID | Name | Status |
2025-09-20 11:31:57.013313 | orchestrator | |------+--------+----------|
2025-09-20 11:31:57.013323 | orchestrator | +------+--------+----------+
2025-09-20 11:31:57.452767 | orchestrator | + osism manage compute list testbed-node-4
2025-09-20 11:32:00.776858 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 11:32:00.776997 | orchestrator | | ID | Name | Status |
2025-09-20 11:32:00.777012 | orchestrator | |--------------------------------------+--------+----------|
2025-09-20 11:32:00.777024 | orchestrator | | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | test-4 | ACTIVE |
2025-09-20 11:32:00.777035 | orchestrator | | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | test-3 | ACTIVE |
2025-09-20 11:32:00.777046 | orchestrator | | 8be32264-c85a-485a-bc17-6b05e3d5e2ea | test-2 | ACTIVE |
2025-09-20 11:32:00.777057 | orchestrator | | d44177f7-00e6-40b1-9e10-3388faf842b9 | test-1 | ACTIVE |
2025-09-20 11:32:00.777068 | orchestrator | | ba383266-425b-4d74-b766-c13e936ea5bf | test | ACTIVE |
2025-09-20 11:32:00.777079 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 11:32:01.189131 | orchestrator | + osism manage compute list testbed-node-5
2025-09-20 11:32:03.918606 | orchestrator | +------+--------+----------+
2025-09-20 11:32:03.918832 | orchestrator | | ID | Name | Status |
2025-09-20 11:32:03.918849 | orchestrator | |------+--------+----------|
2025-09-20 11:32:03.918861 | orchestrator | +------+--------+----------+
2025-09-20 11:32:04.290307 | orchestrator | + server_ping
2025-09-20 11:32:04.291257 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-20 11:32:04.291291 | orchestrator | ++ tr -d '\r'
2025-09-20 11:32:07.528873 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 11:32:07.528989 | orchestrator | + ping -c3 192.168.112.104
2025-09-20 11:32:07.540421 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data.
2025-09-20 11:32:07.540467 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=8.90 ms
2025-09-20 11:32:08.535092 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=1.99 ms
2025-09-20 11:32:09.536540 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.99 ms
2025-09-20 11:32:09.536727 | orchestrator |
2025-09-20 11:32:09.536750 | orchestrator | --- 192.168.112.104 ping statistics ---
2025-09-20 11:32:09.536763 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-20 11:32:09.536774 | orchestrator | rtt min/avg/max/mdev = 1.989/4.293/8.899/3.256 ms
2025-09-20 11:32:09.537302 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 11:32:09.537411 | orchestrator | + ping -c3 192.168.112.116
2025-09-20 11:32:09.548873 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
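
For reference, the drain-and-verify cycle this job repeats (live-migrate all instances off a compute node, then list instances per node to confirm placement) can be condensed into the two osism calls visible in the trace. This is an illustrative sketch using the commands and node names from this log, not the testbed's actual script:

# Live-migrate everything off testbed-node-4 onto testbed-node-5; osism polls
# each migration until it reports ACTIVE (the "is still in progress" lines).
osism manage compute migrate --yes --target testbed-node-5 testbed-node-4

# Afterwards the source node should print an empty table and the target node
# all five test servers, as in the compute_list output above.
for node in testbed-node-3 testbed-node-4 testbed-node-5; do
    osism manage compute list "$node"
done
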
2025-09-20 11:32:09.548966 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.59 ms 2025-09-20 11:32:10.546405 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.23 ms 2025-09-20 11:32:11.547669 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.84 ms 2025-09-20 11:32:11.547763 | orchestrator | 2025-09-20 11:32:11.547774 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-09-20 11:32:11.547783 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:32:11.547790 | orchestrator | rtt min/avg/max/mdev = 1.843/3.556/6.592/2.152 ms 2025-09-20 11:32:11.548207 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:32:11.548224 | orchestrator | + ping -c3 192.168.112.135 2025-09-20 11:32:11.564026 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 2025-09-20 11:32:11.564132 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=9.88 ms 2025-09-20 11:32:12.557310 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.49 ms 2025-09-20 11:32:13.558337 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.79 ms 2025-09-20 11:32:13.558432 | orchestrator | 2025-09-20 11:32:13.558444 | orchestrator | --- 192.168.112.135 ping statistics --- 2025-09-20 11:32:13.558454 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-20 11:32:13.558462 | orchestrator | rtt min/avg/max/mdev = 1.788/4.719/9.881/3.661 ms 2025-09-20 11:32:13.558798 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:32:13.558821 | orchestrator | + ping -c3 192.168.112.197 2025-09-20 11:32:13.569303 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2025-09-20 11:32:13.569352 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=5.13 ms 2025-09-20 11:32:14.569197 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.78 ms 2025-09-20 11:32:15.570512 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.76 ms 2025-09-20 11:32:15.570721 | orchestrator | 2025-09-20 11:32:15.570742 | orchestrator | --- 192.168.112.197 ping statistics --- 2025-09-20 11:32:15.570756 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-20 11:32:15.570768 | orchestrator | rtt min/avg/max/mdev = 1.764/3.226/5.133/1.410 ms 2025-09-20 11:32:15.570862 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:32:15.570878 | orchestrator | + ping -c3 192.168.112.137 2025-09-20 11:32:15.583860 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 
2025-09-20 11:32:15.583947 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=8.12 ms 2025-09-20 11:32:16.578759 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.29 ms 2025-09-20 11:32:17.580413 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.64 ms 2025-09-20 11:32:17.580515 | orchestrator | 2025-09-20 11:32:17.580528 | orchestrator | --- 192.168.112.137 ping statistics --- 2025-09-20 11:32:17.580551 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-20 11:32:17.580562 | orchestrator | rtt min/avg/max/mdev = 1.636/4.016/8.120/2.914 ms 2025-09-20 11:32:17.580572 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-09-20 11:32:20.867884 | orchestrator | 2025-09-20 11:32:20 | INFO  | Live migrating server 12bbf371-7c02-4926-9ba2-33e9d11a2031 2025-09-20 11:32:30.542966 | orchestrator | 2025-09-20 11:32:30 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:32.853195 | orchestrator | 2025-09-20 11:32:32 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:35.211380 | orchestrator | 2025-09-20 11:32:35 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:37.457921 | orchestrator | 2025-09-20 11:32:37 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:39.738053 | orchestrator | 2025-09-20 11:32:39 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:42.005268 | orchestrator | 2025-09-20 11:32:42 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:44.262329 | orchestrator | 2025-09-20 11:32:44 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:46.585613 | orchestrator | 2025-09-20 11:32:46 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) is still in progress 2025-09-20 11:32:48.958860 | orchestrator | 2025-09-20 11:32:48 | INFO  | Live migration of 12bbf371-7c02-4926-9ba2-33e9d11a2031 (test-4) completed with status ACTIVE 2025-09-20 11:32:48.958958 | orchestrator | 2025-09-20 11:32:48 | INFO  | Live migrating server cb2c2c27-309e-4c39-b57b-cb15c72dbf8f 2025-09-20 11:32:58.247284 | orchestrator | 2025-09-20 11:32:58 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:00.578569 | orchestrator | 2025-09-20 11:33:00 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:02.827193 | orchestrator | 2025-09-20 11:33:02 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:05.091540 | orchestrator | 2025-09-20 11:33:05 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:07.441301 | orchestrator | 2025-09-20 11:33:07 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:09.697051 | orchestrator | 2025-09-20 11:33:09 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:11.974124 | orchestrator | 2025-09-20 11:33:11 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f 
(test-3) is still in progress 2025-09-20 11:33:14.468218 | orchestrator | 2025-09-20 11:33:14 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) is still in progress 2025-09-20 11:33:16.730134 | orchestrator | 2025-09-20 11:33:16 | INFO  | Live migration of cb2c2c27-309e-4c39-b57b-cb15c72dbf8f (test-3) completed with status ACTIVE 2025-09-20 11:33:16.730264 | orchestrator | 2025-09-20 11:33:16 | INFO  | Live migrating server 8be32264-c85a-485a-bc17-6b05e3d5e2ea 2025-09-20 11:33:26.509398 | orchestrator | 2025-09-20 11:33:26 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:28.826649 | orchestrator | 2025-09-20 11:33:28 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:31.174912 | orchestrator | 2025-09-20 11:33:31 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:33.518488 | orchestrator | 2025-09-20 11:33:33 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:35.855663 | orchestrator | 2025-09-20 11:33:35 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:38.135240 | orchestrator | 2025-09-20 11:33:38 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:40.442915 | orchestrator | 2025-09-20 11:33:40 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:42.776110 | orchestrator | 2025-09-20 11:33:42 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) is still in progress 2025-09-20 11:33:45.123029 | orchestrator | 2025-09-20 11:33:45 | INFO  | Live migration of 8be32264-c85a-485a-bc17-6b05e3d5e2ea (test-2) completed with status ACTIVE 2025-09-20 11:33:45.123125 | orchestrator | 2025-09-20 11:33:45 | INFO  | Live migrating server d44177f7-00e6-40b1-9e10-3388faf842b9 2025-09-20 11:33:56.276393 | orchestrator | 2025-09-20 11:33:56 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:33:58.654704 | orchestrator | 2025-09-20 11:33:58 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:01.021039 | orchestrator | 2025-09-20 11:34:01 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:03.289706 | orchestrator | 2025-09-20 11:34:03 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:05.555978 | orchestrator | 2025-09-20 11:34:05 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:07.882820 | orchestrator | 2025-09-20 11:34:07 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:10.155706 | orchestrator | 2025-09-20 11:34:10 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:12.433911 | orchestrator | 2025-09-20 11:34:12 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) is still in progress 2025-09-20 11:34:14.669345 | orchestrator | 2025-09-20 11:34:14 | INFO  | Live migration of d44177f7-00e6-40b1-9e10-3388faf842b9 (test-1) completed with status ACTIVE 2025-09-20 11:34:14.669446 | orchestrator | 2025-09-20 
11:34:14 | INFO  | Live migrating server ba383266-425b-4d74-b766-c13e936ea5bf
2025-09-20 11:34:24.857180 | orchestrator | 2025-09-20 11:34:24 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:27.182919 | orchestrator | 2025-09-20 11:34:27 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:29.539285 | orchestrator | 2025-09-20 11:34:29 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:31.866779 | orchestrator | 2025-09-20 11:34:31 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:34.183244 | orchestrator | 2025-09-20 11:34:34 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:36.459495 | orchestrator | 2025-09-20 11:34:36 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:38.702330 | orchestrator | 2025-09-20 11:34:38 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:40.966507 | orchestrator | 2025-09-20 11:34:40 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:43.216080 | orchestrator | 2025-09-20 11:34:43 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) is still in progress
2025-09-20 11:34:45.490515 | orchestrator | 2025-09-20 11:34:45 | INFO  | Live migration of ba383266-425b-4d74-b766-c13e936ea5bf (test) completed with status ACTIVE
2025-09-20 11:34:45.724636 | orchestrator | + compute_list
2025-09-20 11:34:45.724730 | orchestrator | + osism manage compute list testbed-node-3
2025-09-20 11:34:48.325177 | orchestrator | +------+--------+----------+
2025-09-20 11:34:48.325278 | orchestrator | | ID | Name | Status |
2025-09-20 11:34:48.325290 | orchestrator | |------+--------+----------|
2025-09-20 11:34:48.325300 | orchestrator | +------+--------+----------+
2025-09-20 11:34:48.558326 | orchestrator | + osism manage compute list testbed-node-4
2025-09-20 11:34:51.053653 | orchestrator | +------+--------+----------+
2025-09-20 11:34:51.053858 | orchestrator | | ID | Name | Status |
2025-09-20 11:34:51.053888 | orchestrator | |------+--------+----------|
2025-09-20 11:34:51.053902 | orchestrator | +------+--------+----------+
2025-09-20 11:34:51.269618 | orchestrator | + osism manage compute list testbed-node-5
2025-09-20 11:34:54.213118 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 11:34:54.213224 | orchestrator | | ID | Name | Status |
2025-09-20 11:34:54.213238 | orchestrator | |--------------------------------------+--------+----------|
2025-09-20 11:34:54.213249 | orchestrator | | 12bbf371-7c02-4926-9ba2-33e9d11a2031 | test-4 | ACTIVE |
2025-09-20 11:34:54.213260 | orchestrator | | cb2c2c27-309e-4c39-b57b-cb15c72dbf8f | test-3 | ACTIVE |
2025-09-20 11:34:54.213271 | orchestrator | | 8be32264-c85a-485a-bc17-6b05e3d5e2ea | test-2 | ACTIVE |
2025-09-20 11:34:54.213282 | orchestrator | | d44177f7-00e6-40b1-9e10-3388faf842b9 | test-1 | ACTIVE |
2025-09-20 11:34:54.213293 | orchestrator | | ba383266-425b-4d74-b766-c13e936ea5bf | test | ACTIVE |
2025-09-20 11:34:54.213304 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 11:34:54.644421 | orchestrator | + server_ping
2025-09-20 11:34:54.645745 | orchestrator | ++
openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 11:34:54.645856 | orchestrator | ++ tr -d '\r' 2025-09-20 11:34:57.682120 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:34:57.682224 | orchestrator | + ping -c3 192.168.112.104 2025-09-20 11:34:57.692188 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 2025-09-20 11:34:57.692211 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=8.07 ms 2025-09-20 11:34:58.688702 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.49 ms 2025-09-20 11:34:59.690304 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.95 ms 2025-09-20 11:34:59.690404 | orchestrator | 2025-09-20 11:34:59.690419 | orchestrator | --- 192.168.112.104 ping statistics --- 2025-09-20 11:34:59.690431 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-20 11:34:59.690442 | orchestrator | rtt min/avg/max/mdev = 1.945/4.168/8.070/2.768 ms 2025-09-20 11:34:59.690905 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:34:59.691439 | orchestrator | + ping -c3 192.168.112.116 2025-09-20 11:34:59.701285 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2025-09-20 11:34:59.701361 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.43 ms 2025-09-20 11:35:00.699006 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.11 ms 2025-09-20 11:35:01.700503 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.74 ms 2025-09-20 11:35:01.700620 | orchestrator | 2025-09-20 11:35:01.700646 | orchestrator | --- 192.168.112.116 ping statistics --- 2025-09-20 11:35:01.700665 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:35:01.700709 | orchestrator | rtt min/avg/max/mdev = 1.740/3.425/6.425/2.126 ms 2025-09-20 11:35:01.700730 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:35:01.700751 | orchestrator | + ping -c3 192.168.112.135 2025-09-20 11:35:01.712581 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 2025-09-20 11:35:01.712688 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=6.51 ms 2025-09-20 11:35:02.710739 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.68 ms 2025-09-20 11:35:03.712400 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.97 ms 2025-09-20 11:35:03.712510 | orchestrator | 2025-09-20 11:35:03.712532 | orchestrator | --- 192.168.112.135 ping statistics --- 2025-09-20 11:35:03.712550 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-20 11:35:03.712567 | orchestrator | rtt min/avg/max/mdev = 1.965/3.716/6.509/1.995 ms 2025-09-20 11:35:03.712584 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:35:03.712601 | orchestrator | + ping -c3 192.168.112.197 2025-09-20 11:35:03.726807 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 
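
The server_ping step traced above reduces to a single loop: fetch every ACTIVE floating IP and send three ICMP probes to each. The following is a reconstruction from the set -x trace, not the testbed repository's script; the function name and the "test" clouds.yaml profile are taken from the trace:

# Reconstructed from the trace: ping every ACTIVE floating IP three times.
# Assumes a clouds.yaml entry named "test", as used throughout this job.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"   # three probes per address; a failure fails the step
    done
}
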
2025-09-20 11:35:03.726845 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=9.49 ms 2025-09-20 11:35:04.721306 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.50 ms 2025-09-20 11:35:05.723330 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.69 ms 2025-09-20 11:35:05.723431 | orchestrator | 2025-09-20 11:35:05.723446 | orchestrator | --- 192.168.112.197 ping statistics --- 2025-09-20 11:35:05.723459 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:35:05.723471 | orchestrator | rtt min/avg/max/mdev = 1.693/4.563/9.494/3.501 ms 2025-09-20 11:35:05.723483 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 11:35:05.723494 | orchestrator | + ping -c3 192.168.112.137 2025-09-20 11:35:05.734274 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 2025-09-20 11:35:05.734325 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.36 ms 2025-09-20 11:35:06.731968 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.10 ms 2025-09-20 11:35:07.732746 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.50 ms 2025-09-20 11:35:07.733584 | orchestrator | 2025-09-20 11:35:07.733648 | orchestrator | --- 192.168.112.137 ping statistics --- 2025-09-20 11:35:07.733663 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 11:35:07.733675 | orchestrator | rtt min/avg/max/mdev = 1.502/3.321/6.358/2.161 ms 2025-09-20 11:35:08.242052 | orchestrator | ok: Runtime: 0:19:41.612442 2025-09-20 11:35:08.287990 | 2025-09-20 11:35:08.288171 | TASK [Run tempest] 2025-09-20 11:35:08.823663 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:08.840171 | 2025-09-20 11:35:08.840346 | TASK [Check prometheus alert status] 2025-09-20 11:35:09.375847 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:09.377566 | 2025-09-20 11:35:09.377669 | PLAY RECAP 2025-09-20 11:35:09.377737 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-09-20 11:35:09.377764 | 2025-09-20 11:35:09.584491 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-20 11:35:09.586026 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-20 11:35:10.329002 | 2025-09-20 11:35:10.329168 | PLAY [Post output play] 2025-09-20 11:35:10.345112 | 2025-09-20 11:35:10.345262 | LOOP [stage-output : Register sources] 2025-09-20 11:35:10.415362 | 2025-09-20 11:35:10.415671 | TASK [stage-output : Check sudo] 2025-09-20 11:35:11.206882 | orchestrator | sudo: a password is required 2025-09-20 11:35:11.457454 | orchestrator | ok: Runtime: 0:00:00.009418 2025-09-20 11:35:11.473136 | 2025-09-20 11:35:11.473290 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-20 11:35:11.516073 | 2025-09-20 11:35:11.516404 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-20 11:35:11.586203 | orchestrator | ok 2025-09-20 11:35:11.596283 | 2025-09-20 11:35:11.596506 | LOOP [stage-output : Ensure target folders exist] 2025-09-20 11:35:12.044187 | orchestrator | ok: "docs" 2025-09-20 11:35:12.044565 | 2025-09-20 11:35:12.286216 | orchestrator | ok: "artifacts" 2025-09-20 11:35:12.526692 | orchestrator | ok: "logs" 2025-09-20 11:35:12.550214 | 2025-09-20 
11:35:12.550466 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-20 11:35:12.589472 | 2025-09-20 11:35:12.589738 | TASK [stage-output : Make all log files readable] 2025-09-20 11:35:12.897698 | orchestrator | ok 2025-09-20 11:35:12.906717 | 2025-09-20 11:35:12.906905 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-20 11:35:12.943384 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:12.956194 | 2025-09-20 11:35:12.956407 | TASK [stage-output : Discover log files for compression] 2025-09-20 11:35:12.981074 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:12.997189 | 2025-09-20 11:35:12.997420 | LOOP [stage-output : Archive everything from logs] 2025-09-20 11:35:13.042487 | 2025-09-20 11:35:13.042693 | PLAY [Post cleanup play] 2025-09-20 11:35:13.051249 | 2025-09-20 11:35:13.051389 | TASK [Set cloud fact (Zuul deployment)] 2025-09-20 11:35:13.140323 | orchestrator | ok 2025-09-20 11:35:13.152257 | 2025-09-20 11:35:13.152442 | TASK [Set cloud fact (local deployment)] 2025-09-20 11:35:13.186702 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:13.198005 | 2025-09-20 11:35:13.198119 | TASK [Clean the cloud environment] 2025-09-20 11:35:13.962551 | orchestrator | 2025-09-20 11:35:13 - clean up servers 2025-09-20 11:35:14.755452 | orchestrator | 2025-09-20 11:35:14 - testbed-manager 2025-09-20 11:35:14.845973 | orchestrator | 2025-09-20 11:35:14 - testbed-node-4 2025-09-20 11:35:14.937276 | orchestrator | 2025-09-20 11:35:14 - testbed-node-3 2025-09-20 11:35:15.027751 | orchestrator | 2025-09-20 11:35:15 - testbed-node-5 2025-09-20 11:35:15.125530 | orchestrator | 2025-09-20 11:35:15 - testbed-node-2 2025-09-20 11:35:15.215922 | orchestrator | 2025-09-20 11:35:15 - testbed-node-0 2025-09-20 11:35:15.306464 | orchestrator | 2025-09-20 11:35:15 - testbed-node-1 2025-09-20 11:35:15.395870 | orchestrator | 2025-09-20 11:35:15 - clean up keypairs 2025-09-20 11:35:15.412071 | orchestrator | 2025-09-20 11:35:15 - testbed 2025-09-20 11:35:15.435697 | orchestrator | 2025-09-20 11:35:15 - wait for servers to be gone 2025-09-20 11:35:24.273466 | orchestrator | 2025-09-20 11:35:24 - clean up ports 2025-09-20 11:35:24.441337 | orchestrator | 2025-09-20 11:35:24 - 352c89a8-92e2-45e7-af61-09402d59a70b 2025-09-20 11:35:24.719407 | orchestrator | 2025-09-20 11:35:24 - 3f41e4a7-834d-4dcc-b1e8-840728e88237 2025-09-20 11:35:25.627374 | orchestrator | 2025-09-20 11:35:25 - 5225757f-e16e-422c-b731-a9d2af464405 2025-09-20 11:35:25.871371 | orchestrator | 2025-09-20 11:35:25 - 9b3e8a65-afdd-4550-bced-6b3a70839671 2025-09-20 11:35:26.078533 | orchestrator | 2025-09-20 11:35:26 - a95b3186-84be-4b3e-af09-8682aab69bb9 2025-09-20 11:35:26.276951 | orchestrator | 2025-09-20 11:35:26 - c28cce0b-b065-49bd-acf5-b82d9da8cfd7 2025-09-20 11:35:26.481703 | orchestrator | 2025-09-20 11:35:26 - ee3e8c74-9400-4ae8-a1fa-2039bcf5672c 2025-09-20 11:35:26.686141 | orchestrator | 2025-09-20 11:35:26 - clean up volumes 2025-09-20 11:35:26.801861 | orchestrator | 2025-09-20 11:35:26 - testbed-volume-2-node-base 2025-09-20 11:35:26.837668 | orchestrator | 2025-09-20 11:35:26 - testbed-volume-5-node-base 2025-09-20 11:35:26.882198 | orchestrator | 2025-09-20 11:35:26 - testbed-volume-0-node-base 2025-09-20 11:35:26.923687 | orchestrator | 2025-09-20 11:35:26 - testbed-volume-3-node-base 2025-09-20 11:35:26.969947 | orchestrator | 2025-09-20 11:35:26 - testbed-volume-4-node-base 2025-09-20 11:35:27.015964 | orchestrator | 
2025-09-20 11:35:27 - testbed-volume-1-node-base 2025-09-20 11:35:27.058238 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-manager-base 2025-09-20 11:35:27.127871 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-5-node-5 2025-09-20 11:35:27.171320 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-8-node-5 2025-09-20 11:35:27.214382 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-7-node-4 2025-09-20 11:35:27.260930 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-6-node-3 2025-09-20 11:35:27.308642 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-0-node-3 2025-09-20 11:35:27.349330 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-3-node-3 2025-09-20 11:35:27.394997 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-2-node-5 2025-09-20 11:35:27.435406 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-1-node-4 2025-09-20 11:35:27.483881 | orchestrator | 2025-09-20 11:35:27 - testbed-volume-4-node-4 2025-09-20 11:35:27.528263 | orchestrator | 2025-09-20 11:35:27 - disconnect routers 2025-09-20 11:35:27.596295 | orchestrator | 2025-09-20 11:35:27 - testbed 2025-09-20 11:35:28.743440 | orchestrator | 2025-09-20 11:35:28 - clean up subnets 2025-09-20 11:35:28.795171 | orchestrator | 2025-09-20 11:35:28 - subnet-testbed-management 2025-09-20 11:35:28.954389 | orchestrator | 2025-09-20 11:35:28 - clean up networks 2025-09-20 11:35:29.126099 | orchestrator | 2025-09-20 11:35:29 - net-testbed-management 2025-09-20 11:35:29.397250 | orchestrator | 2025-09-20 11:35:29 - clean up security groups 2025-09-20 11:35:29.437762 | orchestrator | 2025-09-20 11:35:29 - testbed-management 2025-09-20 11:35:29.555882 | orchestrator | 2025-09-20 11:35:29 - testbed-node 2025-09-20 11:35:29.674201 | orchestrator | 2025-09-20 11:35:29 - clean up floating ips 2025-09-20 11:35:29.707624 | orchestrator | 2025-09-20 11:35:29 - 81.163.192.43 2025-09-20 11:35:30.048623 | orchestrator | 2025-09-20 11:35:30 - clean up routers 2025-09-20 11:35:30.103684 | orchestrator | 2025-09-20 11:35:30 - testbed 2025-09-20 11:35:31.248081 | orchestrator | ok: Runtime: 0:00:17.424939 2025-09-20 11:35:31.252384 | 2025-09-20 11:35:31.252563 | PLAY RECAP 2025-09-20 11:35:31.252693 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-20 11:35:31.252754 | 2025-09-20 11:35:31.383933 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-20 11:35:31.386223 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-20 11:35:32.122368 | 2025-09-20 11:35:32.122523 | PLAY [Cleanup play] 2025-09-20 11:35:32.138753 | 2025-09-20 11:35:32.138910 | TASK [Set cloud fact (Zuul deployment)] 2025-09-20 11:35:32.190119 | orchestrator | ok 2025-09-20 11:35:32.200408 | 2025-09-20 11:35:32.200577 | TASK [Set cloud fact (local deployment)] 2025-09-20 11:35:32.235507 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:32.251563 | 2025-09-20 11:35:32.251700 | TASK [Clean the cloud environment] 2025-09-20 11:35:33.354002 | orchestrator | 2025-09-20 11:35:33 - clean up servers 2025-09-20 11:35:33.831735 | orchestrator | 2025-09-20 11:35:33 - clean up keypairs 2025-09-20 11:35:33.846964 | orchestrator | 2025-09-20 11:35:33 - wait for servers to be gone 2025-09-20 11:35:33.888249 | orchestrator | 2025-09-20 11:35:33 - clean up ports 2025-09-20 11:35:33.955501 | orchestrator | 2025-09-20 11:35:33 - clean up volumes 2025-09-20 11:35:34.016134 | orchestrator | 2025-09-20 11:35:34 - 
disconnect routers 2025-09-20 11:35:34.044682 | orchestrator | 2025-09-20 11:35:34 - clean up subnets 2025-09-20 11:35:34.063631 | orchestrator | 2025-09-20 11:35:34 - clean up networks 2025-09-20 11:35:34.223077 | orchestrator | 2025-09-20 11:35:34 - clean up security groups 2025-09-20 11:35:34.263437 | orchestrator | 2025-09-20 11:35:34 - clean up floating ips 2025-09-20 11:35:34.287757 | orchestrator | 2025-09-20 11:35:34 - clean up routers 2025-09-20 11:35:34.787741 | orchestrator | ok: Runtime: 0:00:01.320664 2025-09-20 11:35:34.791594 | 2025-09-20 11:35:34.791736 | PLAY RECAP 2025-09-20 11:35:34.791843 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-20 11:35:34.791902 | 2025-09-20 11:35:34.910532 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-20 11:35:34.911537 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-20 11:35:35.636427 | 2025-09-20 11:35:35.636587 | PLAY [Base post-fetch] 2025-09-20 11:35:35.652052 | 2025-09-20 11:35:35.652178 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-20 11:35:35.707853 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:35.723506 | 2025-09-20 11:35:35.723705 | TASK [fetch-output : Set log path for single node] 2025-09-20 11:35:35.770668 | orchestrator | ok 2025-09-20 11:35:35.785519 | 2025-09-20 11:35:35.785659 | LOOP [fetch-output : Ensure local output dirs] 2025-09-20 11:35:36.257851 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/work/logs" 2025-09-20 11:35:36.522181 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/work/artifacts" 2025-09-20 11:35:36.787481 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/efda62675dd1481981f34b8801f8b340/work/docs" 2025-09-20 11:35:36.812531 | 2025-09-20 11:35:36.812776 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-20 11:35:37.703950 | orchestrator | changed: .d..t...... ./ 2025-09-20 11:35:37.704266 | orchestrator | changed: All items complete 2025-09-20 11:35:37.704339 | 2025-09-20 11:35:38.403500 | orchestrator | changed: .d..t...... ./ 2025-09-20 11:35:39.109351 | orchestrator | changed: .d..t...... 
./ 2025-09-20 11:35:39.141728 | 2025-09-20 11:35:39.141895 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-20 11:35:39.169472 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:39.172036 | orchestrator | skipping: Conditional result was False 2025-09-20 11:35:39.196363 | 2025-09-20 11:35:39.196495 | PLAY RECAP 2025-09-20 11:35:39.196576 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-20 11:35:39.196618 | 2025-09-20 11:35:39.316915 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-20 11:35:39.319228 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-20 11:35:40.034144 | 2025-09-20 11:35:40.034301 | PLAY [Base post] 2025-09-20 11:35:40.048928 | 2025-09-20 11:35:40.049062 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-20 11:35:40.985792 | orchestrator | changed 2025-09-20 11:35:40.994176 | 2025-09-20 11:35:40.994289 | PLAY RECAP 2025-09-20 11:35:40.994376 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-20 11:35:40.994438 | 2025-09-20 11:35:41.113030 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-20 11:35:41.115397 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-20 11:35:41.890399 | 2025-09-20 11:35:41.890572 | PLAY [Base post-logs] 2025-09-20 11:35:41.901139 | 2025-09-20 11:35:41.901279 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-20 11:35:42.353432 | localhost | changed 2025-09-20 11:35:42.363653 | 2025-09-20 11:35:42.363806 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-20 11:35:42.400431 | localhost | ok 2025-09-20 11:35:42.405268 | 2025-09-20 11:35:42.405442 | TASK [Set zuul-log-path fact] 2025-09-20 11:35:42.422200 | localhost | ok 2025-09-20 11:35:42.432074 | 2025-09-20 11:35:42.432217 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-20 11:35:42.459937 | localhost | ok 2025-09-20 11:35:42.463072 | 2025-09-20 11:35:42.463177 | TASK [upload-logs : Create log directories] 2025-09-20 11:35:42.966796 | localhost | changed 2025-09-20 11:35:42.969775 | 2025-09-20 11:35:42.969887 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-20 11:35:43.490479 | localhost -> localhost | ok: Runtime: 0:00:00.006593 2025-09-20 11:35:43.494785 | 2025-09-20 11:35:43.494930 | TASK [upload-logs : Upload logs to log server] 2025-09-20 11:35:44.049986 | localhost | Output suppressed because no_log was given 2025-09-20 11:35:44.054406 | 2025-09-20 11:35:44.054618 | LOOP [upload-logs : Compress console log and json output] 2025-09-20 11:35:44.106264 | localhost | skipping: Conditional result was False 2025-09-20 11:35:44.113828 | localhost | skipping: Conditional result was False 2025-09-20 11:35:44.124301 | 2025-09-20 11:35:44.124476 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-20 11:35:44.168198 | localhost | skipping: Conditional result was False 2025-09-20 11:35:44.168478 | 2025-09-20 11:35:44.173854 | localhost | skipping: Conditional result was False 2025-09-20 11:35:44.186022 | 2025-09-20 11:35:44.186286 | LOOP [upload-logs : Upload console log and json output]
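
The "Clean the cloud environment" task in the post and cleanup plays above tears the testbed down in a fixed order: servers, keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally routers. A rough hand-run equivalent with the plain openstack CLI, using the resource names printed in this log; this is an approximation of the order, not the playbook's actual implementation:

# Approximate teardown order as logged by "Clean the cloud environment".
export OS_CLOUD=test                      # assumed clouds.yaml profile
openstack server delete --wait testbed-manager testbed-node-{0..5}
openstack keypair delete testbed
# ports and volumes are removed next (names/IDs as listed in the log), then:
openstack router remove subnet testbed subnet-testbed-management
openstack subnet delete subnet-testbed-management
openstack network delete net-testbed-management
openstack security group delete testbed-management testbed-node
openstack floating ip delete 81.163.192.43
openstack router delete testbed
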